00:00:00.001 Started by upstream project "autotest-per-patch" build number 120539 00:00:00.001 originally caused by: 00:00:00.001 Started by upstream project "jbp-per-patch" build number 21499 00:00:00.001 originally caused by: 00:00:00.002 Started by user sys_sgci 00:00:00.089 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/nvmf-tcp-uring-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-vg.groovy 00:00:00.089 The recommended git tool is: git 00:00:00.090 using credential 00000000-0000-0000-0000-000000000002 00:00:00.091 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/nvmf-tcp-uring-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10 00:00:00.115 Fetching changes from the remote Git repository 00:00:00.116 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10 00:00:00.147 Using shallow fetch with depth 1 00:00:00.147 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool 00:00:00.147 > git --version # timeout=10 00:00:00.176 > git --version # 'git version 2.39.2' 00:00:00.176 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials 00:00:00.176 Setting http proxy: proxy-dmz.intel.com:911 00:00:00.176 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/changes/39/22839/1 # timeout=5 00:00:05.436 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10 00:00:05.446 > git rev-parse FETCH_HEAD^{commit} # timeout=10 00:00:05.458 Checking out Revision 77e645413453ce9660898a799e28995c970fadc7 (FETCH_HEAD) 00:00:05.458 > git config core.sparsecheckout # timeout=10 00:00:05.467 > git read-tree -mu HEAD # timeout=10 00:00:05.482 > git checkout -f 77e645413453ce9660898a799e28995c970fadc7 # timeout=5 00:00:05.501 Commit message: "jobs/autotest-upstream: Enable ASan UBSan on all jobs" 00:00:05.501 > git rev-list --no-walk 34845be7ae448993c10fd8929d8277dc075ec12e # timeout=10 00:00:05.592 [Pipeline] Start of Pipeline 00:00:05.610 [Pipeline] library 00:00:05.612 Loading library shm_lib@master 00:00:05.612 Library shm_lib@master is cached. Copying from home. 00:00:05.626 [Pipeline] node 00:00:20.627 Still waiting to schedule task 00:00:20.628 Waiting for next available executor on ‘vagrant-vm-host’ 00:08:56.530 Running on VM-host-SM17 in /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest 00:08:56.533 [Pipeline] { 00:08:56.546 [Pipeline] catchError 00:08:56.548 [Pipeline] { 00:08:56.563 [Pipeline] wrap 00:08:56.573 [Pipeline] { 00:08:56.582 [Pipeline] stage 00:08:56.584 [Pipeline] { (Prologue) 00:08:56.608 [Pipeline] echo 00:08:56.609 Node: VM-host-SM17 00:08:56.618 [Pipeline] cleanWs 00:08:56.628 [WS-CLEANUP] Deleting project workspace... 00:08:56.629 [WS-CLEANUP] Deferred wipeout is used... 
00:08:56.633 [WS-CLEANUP] done 00:08:57.133 [Pipeline] setCustomBuildProperty 00:08:57.179 [Pipeline] nodesByLabel 00:08:57.180 Found a total of 1 nodes with the 'sorcerer' label 00:08:57.187 [Pipeline] httpRequest 00:08:57.190 HttpMethod: GET 00:08:57.190 URL: http://10.211.164.101/packages/jbp_77e645413453ce9660898a799e28995c970fadc7.tar.gz 00:08:57.192 Sending request to url: http://10.211.164.101/packages/jbp_77e645413453ce9660898a799e28995c970fadc7.tar.gz 00:08:57.194 Response Code: HTTP/1.1 200 OK 00:08:57.194 Success: Status code 200 is in the accepted range: 200,404 00:08:57.194 Saving response body to /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/jbp_77e645413453ce9660898a799e28995c970fadc7.tar.gz 00:08:57.331 [Pipeline] sh 00:08:57.604 + tar --no-same-owner -xf jbp_77e645413453ce9660898a799e28995c970fadc7.tar.gz 00:08:57.627 [Pipeline] httpRequest 00:08:57.631 HttpMethod: GET 00:08:57.631 URL: http://10.211.164.101/packages/spdk_65b4e17c6736ae69784017a5d5557443b6997899.tar.gz 00:08:57.632 Sending request to url: http://10.211.164.101/packages/spdk_65b4e17c6736ae69784017a5d5557443b6997899.tar.gz 00:08:57.633 Response Code: HTTP/1.1 200 OK 00:08:57.634 Success: Status code 200 is in the accepted range: 200,404 00:08:57.634 Saving response body to /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/spdk_65b4e17c6736ae69784017a5d5557443b6997899.tar.gz 00:08:59.797 [Pipeline] sh 00:09:00.075 + tar --no-same-owner -xf spdk_65b4e17c6736ae69784017a5d5557443b6997899.tar.gz 00:09:03.366 [Pipeline] sh 00:09:03.643 + git -C spdk log --oneline -n5 00:09:03.643 65b4e17c6 uuid: clarify spdk_uuid_generate_sha1() return code 00:09:03.643 5d5e4d333 nvmf/rpc: Fail listener add with different secure channel 00:09:03.643 54944c1d1 event: don't NOTICELOG when no RPC server started 00:09:03.643 460a2e391 lib/init: do not fail if missing RPC's subsystem in JSON config doesn't exist in app 00:09:03.643 5dc808124 init: add spdk_subsystem_exists() 00:09:03.660 [Pipeline] writeFile 00:09:03.675 [Pipeline] sh 00:09:03.956 + jbp/jenkins/jjb-config/jobs/scripts/autorun_quirks.sh 00:09:03.968 [Pipeline] sh 00:09:04.248 + cat autorun-spdk.conf 00:09:04.248 SPDK_RUN_FUNCTIONAL_TEST=1 00:09:04.248 SPDK_TEST_NVMF=1 00:09:04.248 SPDK_TEST_NVMF_TRANSPORT=tcp 00:09:04.248 SPDK_TEST_URING=1 00:09:04.248 SPDK_TEST_USDT=1 00:09:04.248 SPDK_RUN_ASAN=1 00:09:04.248 SPDK_RUN_UBSAN=1 00:09:04.248 NET_TYPE=virt 00:09:04.248 SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:09:04.254 RUN_NIGHTLY=0 00:09:04.256 [Pipeline] } 00:09:04.272 [Pipeline] // stage 00:09:04.287 [Pipeline] stage 00:09:04.289 [Pipeline] { (Run VM) 00:09:04.302 [Pipeline] sh 00:09:04.581 + jbp/jenkins/jjb-config/jobs/scripts/prepare_nvme.sh 00:09:04.581 + echo 'Start stage prepare_nvme.sh' 00:09:04.581 Start stage prepare_nvme.sh 00:09:04.581 + [[ -n 4 ]] 00:09:04.581 + disk_prefix=ex4 00:09:04.581 + [[ -n /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest ]] 00:09:04.581 + [[ -e /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/autorun-spdk.conf ]] 00:09:04.581 + source /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/autorun-spdk.conf 00:09:04.581 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:09:04.581 ++ SPDK_TEST_NVMF=1 00:09:04.581 ++ SPDK_TEST_NVMF_TRANSPORT=tcp 00:09:04.581 ++ SPDK_TEST_URING=1 00:09:04.581 ++ SPDK_TEST_USDT=1 00:09:04.581 ++ SPDK_RUN_ASAN=1 00:09:04.581 ++ SPDK_RUN_UBSAN=1 00:09:04.581 ++ NET_TYPE=virt 00:09:04.581 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:09:04.581 ++ RUN_NIGHTLY=0 00:09:04.581 + cd 
/var/jenkins/workspace/nvmf-tcp-uring-vg-autotest 00:09:04.581 + nvme_files=() 00:09:04.581 + declare -A nvme_files 00:09:04.581 + backend_dir=/var/lib/libvirt/images/backends 00:09:04.581 + nvme_files['nvme.img']=5G 00:09:04.581 + nvme_files['nvme-cmb.img']=5G 00:09:04.581 + nvme_files['nvme-multi0.img']=4G 00:09:04.581 + nvme_files['nvme-multi1.img']=4G 00:09:04.581 + nvme_files['nvme-multi2.img']=4G 00:09:04.581 + nvme_files['nvme-openstack.img']=8G 00:09:04.581 + nvme_files['nvme-zns.img']=5G 00:09:04.581 + (( SPDK_TEST_NVME_PMR == 1 )) 00:09:04.581 + (( SPDK_TEST_FTL == 1 )) 00:09:04.581 + (( SPDK_TEST_NVME_FDP == 1 )) 00:09:04.581 + [[ ! -d /var/lib/libvirt/images/backends ]] 00:09:04.581 + for nvme in "${!nvme_files[@]}" 00:09:04.581 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex4-nvme-multi2.img -s 4G 00:09:04.581 Formatting '/var/lib/libvirt/images/backends/ex4-nvme-multi2.img', fmt=raw size=4294967296 preallocation=falloc 00:09:04.581 + for nvme in "${!nvme_files[@]}" 00:09:04.581 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex4-nvme-cmb.img -s 5G 00:09:04.581 Formatting '/var/lib/libvirt/images/backends/ex4-nvme-cmb.img', fmt=raw size=5368709120 preallocation=falloc 00:09:04.581 + for nvme in "${!nvme_files[@]}" 00:09:04.581 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex4-nvme-openstack.img -s 8G 00:09:04.581 Formatting '/var/lib/libvirt/images/backends/ex4-nvme-openstack.img', fmt=raw size=8589934592 preallocation=falloc 00:09:04.581 + for nvme in "${!nvme_files[@]}" 00:09:04.581 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex4-nvme-zns.img -s 5G 00:09:04.581 Formatting '/var/lib/libvirt/images/backends/ex4-nvme-zns.img', fmt=raw size=5368709120 preallocation=falloc 00:09:04.581 + for nvme in "${!nvme_files[@]}" 00:09:04.581 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex4-nvme-multi1.img -s 4G 00:09:04.581 Formatting '/var/lib/libvirt/images/backends/ex4-nvme-multi1.img', fmt=raw size=4294967296 preallocation=falloc 00:09:04.581 + for nvme in "${!nvme_files[@]}" 00:09:04.581 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex4-nvme-multi0.img -s 4G 00:09:04.581 Formatting '/var/lib/libvirt/images/backends/ex4-nvme-multi0.img', fmt=raw size=4294967296 preallocation=falloc 00:09:04.581 + for nvme in "${!nvme_files[@]}" 00:09:04.581 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex4-nvme.img -s 5G 00:09:05.518 Formatting '/var/lib/libvirt/images/backends/ex4-nvme.img', fmt=raw size=5368709120 preallocation=falloc 00:09:05.518 ++ sudo grep -rl ex4-nvme.img /etc/libvirt/qemu 00:09:05.518 + echo 'End stage prepare_nvme.sh' 00:09:05.518 End stage prepare_nvme.sh 00:09:05.530 [Pipeline] sh 00:09:05.811 + DISTRO=fedora38 CPUS=10 RAM=12288 jbp/jenkins/jjb-config/jobs/scripts/vagrant_create_vm.sh 00:09:05.811 Setup: -n 10 -s 12288 -x http://proxy-dmz.intel.com:911 -p libvirt --qemu-emulator=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64 --nic-model=e1000 -b /var/lib/libvirt/images/backends/ex4-nvme.img -b /var/lib/libvirt/images/backends/ex4-nvme-multi0.img,nvme,/var/lib/libvirt/images/backends/ex4-nvme-multi1.img:/var/lib/libvirt/images/backends/ex4-nvme-multi2.img -H -a -v -f fedora38 00:09:05.811 00:09:05.811 DIR=/var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/spdk/scripts/vagrant 00:09:05.811 
SPDK_DIR=/var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/spdk 00:09:05.811 VAGRANT_TARGET=/var/jenkins/workspace/nvmf-tcp-uring-vg-autotest 00:09:05.811 HELP=0 00:09:05.811 DRY_RUN=0 00:09:05.811 NVME_FILE=/var/lib/libvirt/images/backends/ex4-nvme.img,/var/lib/libvirt/images/backends/ex4-nvme-multi0.img, 00:09:05.811 NVME_DISKS_TYPE=nvme,nvme, 00:09:05.811 NVME_AUTO_CREATE=0 00:09:05.811 NVME_DISKS_NAMESPACES=,/var/lib/libvirt/images/backends/ex4-nvme-multi1.img:/var/lib/libvirt/images/backends/ex4-nvme-multi2.img, 00:09:05.811 NVME_CMB=,, 00:09:05.811 NVME_PMR=,, 00:09:05.811 NVME_ZNS=,, 00:09:05.811 NVME_MS=,, 00:09:05.811 NVME_FDP=,, 00:09:05.811 SPDK_VAGRANT_DISTRO=fedora38 00:09:05.811 SPDK_VAGRANT_VMCPU=10 00:09:05.811 SPDK_VAGRANT_VMRAM=12288 00:09:05.811 SPDK_VAGRANT_PROVIDER=libvirt 00:09:05.811 SPDK_VAGRANT_HTTP_PROXY=http://proxy-dmz.intel.com:911 00:09:05.811 SPDK_QEMU_EMULATOR=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64 00:09:05.811 SPDK_OPENSTACK_NETWORK=0 00:09:05.811 VAGRANT_PACKAGE_BOX=0 00:09:05.811 VAGRANTFILE=/var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/spdk/scripts/vagrant/Vagrantfile 00:09:05.811 FORCE_DISTRO=true 00:09:05.811 VAGRANT_BOX_VERSION= 00:09:05.811 EXTRA_VAGRANTFILES= 00:09:05.811 NIC_MODEL=e1000 00:09:05.811 00:09:05.811 mkdir: created directory '/var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/fedora38-libvirt' 00:09:05.811 /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/fedora38-libvirt /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest 00:09:09.113 Bringing machine 'default' up with 'libvirt' provider... 00:09:10.080 ==> default: Creating image (snapshot of base box volume). 00:09:10.338 ==> default: Creating domain with the following settings... 00:09:10.338 ==> default: -- Name: fedora38-38-1.6-1705279005-2131_default_1713433053_ff2bc47601541acb7adb 00:09:10.338 ==> default: -- Domain type: kvm 00:09:10.338 ==> default: -- Cpus: 10 00:09:10.338 ==> default: -- Feature: acpi 00:09:10.338 ==> default: -- Feature: apic 00:09:10.338 ==> default: -- Feature: pae 00:09:10.338 ==> default: -- Memory: 12288M 00:09:10.338 ==> default: -- Memory Backing: hugepages: 00:09:10.338 ==> default: -- Management MAC: 00:09:10.338 ==> default: -- Loader: 00:09:10.338 ==> default: -- Nvram: 00:09:10.338 ==> default: -- Base box: spdk/fedora38 00:09:10.338 ==> default: -- Storage pool: default 00:09:10.338 ==> default: -- Image: /var/lib/libvirt/images/fedora38-38-1.6-1705279005-2131_default_1713433053_ff2bc47601541acb7adb.img (20G) 00:09:10.338 ==> default: -- Volume Cache: default 00:09:10.338 ==> default: -- Kernel: 00:09:10.338 ==> default: -- Initrd: 00:09:10.338 ==> default: -- Graphics Type: vnc 00:09:10.338 ==> default: -- Graphics Port: -1 00:09:10.338 ==> default: -- Graphics IP: 127.0.0.1 00:09:10.338 ==> default: -- Graphics Password: Not defined 00:09:10.338 ==> default: -- Video Type: cirrus 00:09:10.338 ==> default: -- Video VRAM: 9216 00:09:10.338 ==> default: -- Sound Type: 00:09:10.338 ==> default: -- Keymap: en-us 00:09:10.338 ==> default: -- TPM Path: 00:09:10.338 ==> default: -- INPUT: type=mouse, bus=ps2 00:09:10.338 ==> default: -- Command line args: 00:09:10.338 ==> default: -> value=-device, 00:09:10.338 ==> default: -> value=nvme,id=nvme-0,serial=12340,addr=0x10, 00:09:10.338 ==> default: -> value=-drive, 00:09:10.338 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex4-nvme.img,if=none,id=nvme-0-drive0, 00:09:10.338 ==> default: -> value=-device, 00:09:10.338 ==> default: -> 
value=nvme-ns,drive=nvme-0-drive0,bus=nvme-0,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:09:10.338 ==> default: -> value=-device, 00:09:10.338 ==> default: -> value=nvme,id=nvme-1,serial=12341,addr=0x11, 00:09:10.338 ==> default: -> value=-drive, 00:09:10.338 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex4-nvme-multi0.img,if=none,id=nvme-1-drive0, 00:09:10.338 ==> default: -> value=-device, 00:09:10.338 ==> default: -> value=nvme-ns,drive=nvme-1-drive0,bus=nvme-1,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:09:10.338 ==> default: -> value=-drive, 00:09:10.338 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex4-nvme-multi1.img,if=none,id=nvme-1-drive1, 00:09:10.338 ==> default: -> value=-device, 00:09:10.338 ==> default: -> value=nvme-ns,drive=nvme-1-drive1,bus=nvme-1,nsid=2,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:09:10.338 ==> default: -> value=-drive, 00:09:10.338 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex4-nvme-multi2.img,if=none,id=nvme-1-drive2, 00:09:10.338 ==> default: -> value=-device, 00:09:10.338 ==> default: -> value=nvme-ns,drive=nvme-1-drive2,bus=nvme-1,nsid=3,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:09:10.597 ==> default: Creating shared folders metadata... 00:09:10.597 ==> default: Starting domain. 00:09:12.498 ==> default: Waiting for domain to get an IP address... 00:09:34.430 ==> default: Waiting for SSH to become available... 00:09:34.430 ==> default: Configuring and enabling network interfaces... 00:09:36.334 default: SSH address: 192.168.121.89:22 00:09:36.334 default: SSH username: vagrant 00:09:36.334 default: SSH auth method: private key 00:09:38.866 ==> default: Rsyncing folder: /mnt/jenkins_nvme/jenkins/workspace/nvmf-tcp-uring-vg-autotest/spdk/ => /home/vagrant/spdk_repo/spdk 00:09:46.991 ==> default: Mounting SSHFS shared folder... 00:09:47.927 ==> default: Mounting folder via SSHFS: /mnt/jenkins_nvme/jenkins/workspace/nvmf-tcp-uring-vg-autotest/fedora38-libvirt/output => /home/vagrant/spdk_repo/output 00:09:47.927 ==> default: Checking Mount.. 00:09:48.863 ==> default: Folder Successfully Mounted! 00:09:48.863 ==> default: Running provisioner: file... 00:09:49.802 default: ~/.gitconfig => .gitconfig 00:09:50.060 00:09:50.060 SUCCESS! 00:09:50.060 00:09:50.060 cd to /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/fedora38-libvirt and type "vagrant ssh" to use. 00:09:50.060 Use vagrant "suspend" and vagrant "resume" to stop and start. 00:09:50.060 Use vagrant "destroy" followed by "rm -rf /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/fedora38-libvirt" to destroy all trace of vm. 
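The -device/-drive pairs listed above are what attach the raw backing images to the guest as emulated NVMe controllers and namespaces. A trimmed reconstruction of that QEMU invocation for the first controller only (illustrative; the full command line, including the second controller nvme-1 with its three namespaces, is assembled by the Vagrant libvirt provider from the args shown above):

    qemu-system-x86_64 \
        -device nvme,id=nvme-0,serial=12340,addr=0x10 \
        -drive format=raw,file=/var/lib/libvirt/images/backends/ex4-nvme.img,if=none,id=nvme-0-drive0 \
        -device nvme-ns,drive=nvme-0-drive0,bus=nvme-0,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096

Each nvme device defines a controller with no storage of its own; the nvme-ns devices then bind one raw image per namespace to a controller via its bus id, which is how nvme-1 ends up exposing three namespaces (multi0/multi1/multi2) in the guest.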
00:09:50.060 00:09:50.069 [Pipeline] } 00:09:50.087 [Pipeline] // stage 00:09:50.096 [Pipeline] dir 00:09:50.096 Running in /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/fedora38-libvirt 00:09:50.098 [Pipeline] { 00:09:50.112 [Pipeline] catchError 00:09:50.114 [Pipeline] { 00:09:50.127 [Pipeline] sh 00:09:50.406 + vagrant ssh-config --host vagrant 00:09:50.406 + sed -ne /^Host/,$p 00:09:50.406 + tee ssh_conf 00:09:54.648 Host vagrant 00:09:54.648 HostName 192.168.121.89 00:09:54.648 User vagrant 00:09:54.648 Port 22 00:09:54.648 UserKnownHostsFile /dev/null 00:09:54.648 StrictHostKeyChecking no 00:09:54.648 PasswordAuthentication no 00:09:54.648 IdentityFile /var/lib/libvirt/images/.vagrant.d/boxes/spdk-VAGRANTSLASH-fedora38/38-1.6-1705279005-2131/libvirt/fedora38 00:09:54.648 IdentitiesOnly yes 00:09:54.648 LogLevel FATAL 00:09:54.648 ForwardAgent yes 00:09:54.648 ForwardX11 yes 00:09:54.648 00:09:54.662 [Pipeline] withEnv 00:09:54.664 [Pipeline] { 00:09:54.678 [Pipeline] sh 00:09:54.959 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant #!/bin/bash 00:09:54.959 source /etc/os-release 00:09:54.959 [[ -e /image.version ]] && img=$(< /image.version) 00:09:54.959 # Minimal, systemd-like check. 00:09:54.959 if [[ -e /.dockerenv ]]; then 00:09:54.959 # Clear garbage from the node's name: 00:09:54.959 # agt-er_autotest_547-896 -> autotest_547-896 00:09:54.959 # $HOSTNAME is the actual container id 00:09:54.959 agent=$HOSTNAME@${DOCKER_SWARM_PLUGIN_JENKINS_AGENT_NAME#*_} 00:09:54.959 if mountpoint -q /etc/hostname; then 00:09:54.959 # We can assume this is a mount from a host where container is running, 00:09:54.959 # so fetch its hostname to easily identify the target swarm worker. 00:09:54.959 container="$(< /etc/hostname) ($agent)" 00:09:54.959 else 00:09:54.959 # Fallback 00:09:54.959 container=$agent 00:09:54.959 fi 00:09:54.959 fi 00:09:54.959 echo "${NAME} ${VERSION_ID}|$(uname -r)|${img:-N/A}|${container:-N/A}" 00:09:54.959 00:09:55.230 [Pipeline] } 00:09:55.249 [Pipeline] // withEnv 00:09:55.258 [Pipeline] setCustomBuildProperty 00:09:55.273 [Pipeline] stage 00:09:55.275 [Pipeline] { (Tests) 00:09:55.292 [Pipeline] sh 00:09:55.568 + scp -F ssh_conf -r /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh vagrant@vagrant:./ 00:09:55.842 [Pipeline] timeout 00:09:55.843 Timeout set to expire in 30 min 00:09:55.844 [Pipeline] { 00:09:55.862 [Pipeline] sh 00:09:56.142 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant git -C spdk_repo/spdk reset --hard 00:09:56.709 HEAD is now at 65b4e17c6 uuid: clarify spdk_uuid_generate_sha1() return code 00:09:56.723 [Pipeline] sh 00:09:57.002 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant sudo chown vagrant:vagrant spdk_repo 00:09:57.274 [Pipeline] sh 00:09:57.553 + scp -F ssh_conf -r /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/autorun-spdk.conf vagrant@vagrant:spdk_repo 00:09:57.568 [Pipeline] sh 00:09:57.848 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant ./autoruner.sh spdk_repo 00:09:57.848 ++ readlink -f spdk_repo 00:09:57.848 + DIR_ROOT=/home/vagrant/spdk_repo 00:09:57.848 + [[ -n /home/vagrant/spdk_repo ]] 00:09:57.848 + DIR_SPDK=/home/vagrant/spdk_repo/spdk 00:09:57.848 + DIR_OUTPUT=/home/vagrant/spdk_repo/output 00:09:57.848 + [[ -d /home/vagrant/spdk_repo/spdk ]] 00:09:57.848 + [[ ! 
-d /home/vagrant/spdk_repo/output ]] 00:09:57.848 + [[ -d /home/vagrant/spdk_repo/output ]] 00:09:57.848 + cd /home/vagrant/spdk_repo 00:09:57.848 + source /etc/os-release 00:09:57.848 ++ NAME='Fedora Linux' 00:09:57.848 ++ VERSION='38 (Cloud Edition)' 00:09:57.848 ++ ID=fedora 00:09:57.848 ++ VERSION_ID=38 00:09:57.848 ++ VERSION_CODENAME= 00:09:57.848 ++ PLATFORM_ID=platform:f38 00:09:57.848 ++ PRETTY_NAME='Fedora Linux 38 (Cloud Edition)' 00:09:57.848 ++ ANSI_COLOR='0;38;2;60;110;180' 00:09:57.848 ++ LOGO=fedora-logo-icon 00:09:57.848 ++ CPE_NAME=cpe:/o:fedoraproject:fedora:38 00:09:57.848 ++ HOME_URL=https://fedoraproject.org/ 00:09:57.848 ++ DOCUMENTATION_URL=https://docs.fedoraproject.org/en-US/fedora/f38/system-administrators-guide/ 00:09:57.848 ++ SUPPORT_URL=https://ask.fedoraproject.org/ 00:09:57.848 ++ BUG_REPORT_URL=https://bugzilla.redhat.com/ 00:09:57.848 ++ REDHAT_BUGZILLA_PRODUCT=Fedora 00:09:57.848 ++ REDHAT_BUGZILLA_PRODUCT_VERSION=38 00:09:57.848 ++ REDHAT_SUPPORT_PRODUCT=Fedora 00:09:57.848 ++ REDHAT_SUPPORT_PRODUCT_VERSION=38 00:09:57.848 ++ SUPPORT_END=2024-05-14 00:09:57.848 ++ VARIANT='Cloud Edition' 00:09:57.848 ++ VARIANT_ID=cloud 00:09:57.848 + uname -a 00:09:57.848 Linux fedora38-cloud-1705279005-2131 6.7.0-68.fc38.x86_64 #1 SMP PREEMPT_DYNAMIC Mon Jan 15 00:59:40 UTC 2024 x86_64 GNU/Linux 00:09:57.848 + sudo /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 00:09:58.417 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:09:58.417 Hugepages 00:09:58.417 node hugesize free / total 00:09:58.417 node0 1048576kB 0 / 0 00:09:58.417 node0 2048kB 0 / 0 00:09:58.417 00:09:58.417 Type BDF Vendor Device NUMA Driver Device Block devices 00:09:58.417 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda 00:09:58.417 NVMe 0000:00:10.0 1b36 0010 unknown nvme nvme0 nvme0n1 00:09:58.417 NVMe 0000:00:11.0 1b36 0010 unknown nvme nvme1 nvme1n1 nvme1n2 nvme1n3 00:09:58.417 + rm -f /tmp/spdk-ld-path 00:09:58.417 + source autorun-spdk.conf 00:09:58.417 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:09:58.417 ++ SPDK_TEST_NVMF=1 00:09:58.417 ++ SPDK_TEST_NVMF_TRANSPORT=tcp 00:09:58.417 ++ SPDK_TEST_URING=1 00:09:58.417 ++ SPDK_TEST_USDT=1 00:09:58.417 ++ SPDK_RUN_ASAN=1 00:09:58.417 ++ SPDK_RUN_UBSAN=1 00:09:58.417 ++ NET_TYPE=virt 00:09:58.417 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:09:58.417 ++ RUN_NIGHTLY=0 00:09:58.417 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 )) 00:09:58.417 + [[ -n '' ]] 00:09:58.417 + sudo git config --global --add safe.directory /home/vagrant/spdk_repo/spdk 00:09:58.417 + for M in /var/spdk/build-*-manifest.txt 00:09:58.417 + [[ -f /var/spdk/build-pkg-manifest.txt ]] 00:09:58.417 + cp /var/spdk/build-pkg-manifest.txt /home/vagrant/spdk_repo/output/ 00:09:58.676 + for M in /var/spdk/build-*-manifest.txt 00:09:58.676 + [[ -f /var/spdk/build-repo-manifest.txt ]] 00:09:58.676 + cp /var/spdk/build-repo-manifest.txt /home/vagrant/spdk_repo/output/ 00:09:58.676 ++ uname 00:09:58.676 + [[ Linux == \L\i\n\u\x ]] 00:09:58.676 + sudo dmesg -T 00:09:58.676 + sudo dmesg --clear 00:09:58.676 + dmesg_pid=5093 00:09:58.676 + [[ Fedora Linux == FreeBSD ]] 00:09:58.676 + export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:09:58.676 + UNBIND_ENTIRE_IOMMU_GROUP=yes 00:09:58.676 + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]] 00:09:58.676 + sudo dmesg -Tw 00:09:58.676 + [[ -x /usr/src/fio-static/fio ]] 00:09:58.676 + export FIO_BIN=/usr/src/fio-static/fio 00:09:58.676 + FIO_BIN=/usr/src/fio-static/fio 
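The autorun-spdk.conf sourced above is a plain shell fragment of KEY=value flags, and spdk/autorun.sh (invoked just below) sources it to decide which suites to run. A minimal, hypothetical sketch of that gating pattern, using only flag names that appear in this log (the real autorun.sh/autotest.sh logic is considerably more involved):

    #!/usr/bin/env bash
    # Hypothetical illustration, not the actual spdk/autorun.sh: source the
    # config and enable a suite only when its SPDK_TEST_* flag is set to 1.
    set -euo pipefail
    conf=${1:-autorun-spdk.conf}
    source "$conf"
    if (( ${SPDK_RUN_FUNCTIONAL_TEST:-0} == 1 && ${SPDK_TEST_NVMF:-0} == 1 )); then
        echo "run NVMe-oF tests over ${SPDK_TEST_NVMF_TRANSPORT:-tcp} (uring=${SPDK_TEST_URING:-0})"
    fi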
00:09:58.676 + [[ '' == \/\q\e\m\u\_\v\f\i\o\/* ]] 00:09:58.676 + [[ ! -v VFIO_QEMU_BIN ]] 00:09:58.676 + [[ -e /usr/local/qemu/vfio-user-latest ]] 00:09:58.676 + export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:09:58.676 + VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:09:58.676 + [[ -e /usr/local/qemu/vanilla-latest ]] 00:09:58.676 + export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:09:58.676 + QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:09:58.676 + spdk/autorun.sh /home/vagrant/spdk_repo/autorun-spdk.conf 00:09:58.676 Test configuration: 00:09:58.676 SPDK_RUN_FUNCTIONAL_TEST=1 00:09:58.676 SPDK_TEST_NVMF=1 00:09:58.676 SPDK_TEST_NVMF_TRANSPORT=tcp 00:09:58.676 SPDK_TEST_URING=1 00:09:58.676 SPDK_TEST_USDT=1 00:09:58.676 SPDK_RUN_ASAN=1 00:09:58.676 SPDK_RUN_UBSAN=1 00:09:58.676 NET_TYPE=virt 00:09:58.676 SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:09:58.676 RUN_NIGHTLY=0 09:38:21 -- common/autobuild_common.sh@15 -- $ source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:09:58.676 09:38:21 -- scripts/common.sh@502 -- $ [[ -e /bin/wpdk_common.sh ]] 00:09:58.676 09:38:21 -- scripts/common.sh@510 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:58.676 09:38:21 -- scripts/common.sh@511 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:58.676 09:38:21 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:58.676 09:38:21 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:58.676 09:38:21 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:58.676 09:38:21 -- paths/export.sh@5 -- $ export PATH 00:09:58.676 09:38:21 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:58.676 09:38:21 -- common/autobuild_common.sh@434 -- $ out=/home/vagrant/spdk_repo/spdk/../output 00:09:58.676 09:38:21 -- common/autobuild_common.sh@435 -- $ date +%s 00:09:58.676 09:38:21 -- common/autobuild_common.sh@435 -- $ mktemp -dt spdk_1713433101.XXXXXX 00:09:58.676 09:38:21 -- common/autobuild_common.sh@435 -- $ SPDK_WORKSPACE=/tmp/spdk_1713433101.rPzjG1 00:09:58.676 09:38:21 -- common/autobuild_common.sh@437 -- $ [[ -n '' ]] 
00:09:58.676 09:38:21 -- common/autobuild_common.sh@441 -- $ '[' -n '' ']' 00:09:58.676 09:38:21 -- common/autobuild_common.sh@444 -- $ scanbuild_exclude='--exclude /home/vagrant/spdk_repo/spdk/dpdk/' 00:09:58.676 09:38:21 -- common/autobuild_common.sh@448 -- $ scanbuild_exclude+=' --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp' 00:09:58.676 09:38:21 -- common/autobuild_common.sh@450 -- $ scanbuild='scan-build -o /home/vagrant/spdk_repo/spdk/../output/scan-build-tmp --exclude /home/vagrant/spdk_repo/spdk/dpdk/ --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp --status-bugs' 00:09:58.676 09:38:21 -- common/autobuild_common.sh@451 -- $ get_config_params 00:09:58.676 09:38:21 -- common/autotest_common.sh@385 -- $ xtrace_disable 00:09:58.676 09:38:21 -- common/autotest_common.sh@10 -- $ set +x 00:09:58.676 09:38:22 -- common/autobuild_common.sh@451 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-usdt --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-asan --enable-coverage --with-ublk --with-uring' 00:09:58.676 09:38:22 -- common/autobuild_common.sh@453 -- $ start_monitor_resources 00:09:58.676 09:38:22 -- pm/common@17 -- $ local monitor 00:09:58.676 09:38:22 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:09:58.676 09:38:22 -- pm/common@23 -- $ MONITOR_RESOURCES_PIDS["$monitor"]=5127 00:09:58.676 09:38:22 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:09:58.676 09:38:22 -- pm/common@23 -- $ MONITOR_RESOURCES_PIDS["$monitor"]=5129 00:09:58.676 09:38:22 -- pm/common@26 -- $ sleep 1 00:09:58.676 09:38:22 -- pm/common@21 -- $ date +%s 00:09:58.676 09:38:22 -- pm/common@21 -- $ date +%s 00:09:58.676 09:38:22 -- pm/common@21 -- $ sudo -E /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autobuild.sh.1713433102 00:09:58.676 09:38:22 -- pm/common@21 -- $ sudo -E /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autobuild.sh.1713433102 00:09:58.935 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autobuild.sh.1713433102_collect-vmstat.pm.log 00:09:58.935 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autobuild.sh.1713433102_collect-cpu-load.pm.log 00:09:59.886 09:38:23 -- common/autobuild_common.sh@454 -- $ trap stop_monitor_resources EXIT 00:09:59.886 09:38:23 -- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD= 00:09:59.886 09:38:23 -- spdk/autobuild.sh@12 -- $ umask 022 00:09:59.886 09:38:23 -- spdk/autobuild.sh@13 -- $ cd /home/vagrant/spdk_repo/spdk 00:09:59.886 09:38:23 -- spdk/autobuild.sh@16 -- $ date -u 00:09:59.886 Thu Apr 18 09:38:23 AM UTC 2024 00:09:59.886 09:38:23 -- spdk/autobuild.sh@17 -- $ git describe --tags 00:09:59.886 v24.05-pre-407-g65b4e17c6 00:09:59.886 09:38:23 -- spdk/autobuild.sh@19 -- $ '[' 1 -eq 1 ']' 00:09:59.886 09:38:23 -- spdk/autobuild.sh@20 -- $ run_test asan echo 'using asan' 00:09:59.886 09:38:23 -- common/autotest_common.sh@1087 -- $ '[' 3 -le 1 ']' 00:09:59.886 09:38:23 -- common/autotest_common.sh@1093 -- $ xtrace_disable 00:09:59.886 09:38:23 -- common/autotest_common.sh@10 -- $ set +x 00:09:59.886 ************************************ 00:09:59.886 START TEST asan 00:09:59.886 ************************************ 00:09:59.886 09:38:23 -- common/autotest_common.sh@1111 -- $ echo 'using asan' 00:09:59.886 using asan 00:09:59.886 
************************************ 00:09:59.886 END TEST asan 00:09:59.886 ************************************ 00:09:59.886 00:09:59.886 real 0m0.000s 00:09:59.886 user 0m0.000s 00:09:59.886 sys 0m0.000s 00:09:59.886 09:38:23 -- common/autotest_common.sh@1112 -- $ xtrace_disable 00:09:59.886 09:38:23 -- common/autotest_common.sh@10 -- $ set +x 00:09:59.886 09:38:23 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']' 00:09:59.886 09:38:23 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan' 00:09:59.886 09:38:23 -- common/autotest_common.sh@1087 -- $ '[' 3 -le 1 ']' 00:09:59.886 09:38:23 -- common/autotest_common.sh@1093 -- $ xtrace_disable 00:09:59.886 09:38:23 -- common/autotest_common.sh@10 -- $ set +x 00:09:59.886 ************************************ 00:09:59.886 START TEST ubsan 00:09:59.886 ************************************ 00:09:59.886 using ubsan 00:09:59.886 09:38:23 -- common/autotest_common.sh@1111 -- $ echo 'using ubsan' 00:09:59.886 00:09:59.886 real 0m0.000s 00:09:59.886 user 0m0.000s 00:09:59.886 sys 0m0.000s 00:09:59.886 09:38:23 -- common/autotest_common.sh@1112 -- $ xtrace_disable 00:09:59.886 ************************************ 00:09:59.886 END TEST ubsan 00:09:59.886 ************************************ 00:09:59.886 09:38:23 -- common/autotest_common.sh@10 -- $ set +x 00:10:00.151 09:38:23 -- spdk/autobuild.sh@27 -- $ '[' -n '' ']' 00:10:00.151 09:38:23 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in 00:10:00.151 09:38:23 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]] 00:10:00.151 09:38:23 -- spdk/autobuild.sh@51 -- $ [[ 0 -eq 1 ]] 00:10:00.151 09:38:23 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]] 00:10:00.151 09:38:23 -- spdk/autobuild.sh@57 -- $ [[ 0 -eq 1 ]] 00:10:00.151 09:38:23 -- spdk/autobuild.sh@59 -- $ [[ 0 -eq 1 ]] 00:10:00.151 09:38:23 -- spdk/autobuild.sh@62 -- $ [[ 0 -eq 1 ]] 00:10:00.151 09:38:23 -- spdk/autobuild.sh@67 -- $ /home/vagrant/spdk_repo/spdk/configure --enable-debug --enable-werror --with-rdma --with-usdt --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-asan --enable-coverage --with-ublk --with-uring --with-shared 00:10:00.151 Using default SPDK env in /home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:10:00.151 Using default DPDK in /home/vagrant/spdk_repo/spdk/dpdk/build 00:10:00.410 Using 'verbs' RDMA provider 00:10:13.996 Configuring ISA-L (logfile: /home/vagrant/spdk_repo/spdk/.spdk-isal.log)...done. 00:10:28.895 Configuring ISA-L-crypto (logfile: /home/vagrant/spdk_repo/spdk/.spdk-isal-crypto.log)...done. 00:10:28.895 Creating mk/config.mk...done. 00:10:28.895 Creating mk/cc.flags.mk...done. 00:10:28.895 Type 'make' to build. 00:10:28.895 09:38:50 -- spdk/autobuild.sh@69 -- $ run_test make make -j10 00:10:28.895 09:38:50 -- common/autotest_common.sh@1087 -- $ '[' 3 -le 1 ']' 00:10:28.895 09:38:50 -- common/autotest_common.sh@1093 -- $ xtrace_disable 00:10:28.895 09:38:50 -- common/autotest_common.sh@10 -- $ set +x 00:10:28.895 ************************************ 00:10:28.895 START TEST make 00:10:28.895 ************************************ 00:10:28.895 09:38:50 -- common/autotest_common.sh@1111 -- $ make -j10 00:10:28.895 make[1]: Nothing to be done for 'all'. 
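The DPDK submodule configuration that follows is driven by Meson. The "User defined options" block printed further down (buildtype, default_library, b_sanitize, the disable_apps/disable_libs lists, enable_drivers) corresponds roughly to a meson setup invocation along these lines, shown here only as a hedged reconstruction for orientation rather than the literal command SPDK's configure wrapper ran:

    # Approximation only; run from spdk/dpdk, with the option values echoed below.
    meson setup build-tmp \
        --prefix=/home/vagrant/spdk_repo/spdk/dpdk/build \
        --libdir=lib \
        -Dbuildtype=debug \
        -Ddefault_library=shared \
        -Db_sanitize=address \
        -Dc_args='-Wno-stringop-overflow -fcommon -Wno-stringop-overread -Wno-array-bounds -fPIC -Werror' \
        -Denable_drivers=bus,bus/pci,bus/vdev,mempool/ring \
        -Denable_docs=false -Denable_kmods=false -Dtests=false
    # plus the long -Ddisable_apps=... and -Ddisable_libs=... lists exactly as
    # printed in the "User defined options" section of the Meson output below.

b_sanitize=address is what makes Meson report "Library asan found: YES" during the compiler checks, matching the --enable-asan/--enable-ubsan flags passed to SPDK's own configure above.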
00:10:41.117 The Meson build system 00:10:41.117 Version: 1.3.1 00:10:41.117 Source dir: /home/vagrant/spdk_repo/spdk/dpdk 00:10:41.117 Build dir: /home/vagrant/spdk_repo/spdk/dpdk/build-tmp 00:10:41.117 Build type: native build 00:10:41.117 Program cat found: YES (/usr/bin/cat) 00:10:41.117 Project name: DPDK 00:10:41.117 Project version: 23.11.0 00:10:41.117 C compiler for the host machine: cc (gcc 13.2.1 "cc (GCC) 13.2.1 20231011 (Red Hat 13.2.1-4)") 00:10:41.117 C linker for the host machine: cc ld.bfd 2.39-16 00:10:41.117 Host machine cpu family: x86_64 00:10:41.117 Host machine cpu: x86_64 00:10:41.117 Message: ## Building in Developer Mode ## 00:10:41.117 Program pkg-config found: YES (/usr/bin/pkg-config) 00:10:41.117 Program check-symbols.sh found: YES (/home/vagrant/spdk_repo/spdk/dpdk/buildtools/check-symbols.sh) 00:10:41.117 Program options-ibverbs-static.sh found: YES (/home/vagrant/spdk_repo/spdk/dpdk/buildtools/options-ibverbs-static.sh) 00:10:41.117 Program python3 found: YES (/usr/bin/python3) 00:10:41.117 Program cat found: YES (/usr/bin/cat) 00:10:41.117 Compiler for C supports arguments -march=native: YES 00:10:41.117 Checking for size of "void *" : 8 00:10:41.117 Checking for size of "void *" : 8 (cached) 00:10:41.117 Library m found: YES 00:10:41.117 Library numa found: YES 00:10:41.117 Has header "numaif.h" : YES 00:10:41.117 Library fdt found: NO 00:10:41.117 Library execinfo found: NO 00:10:41.117 Has header "execinfo.h" : YES 00:10:41.117 Found pkg-config: YES (/usr/bin/pkg-config) 1.8.0 00:10:41.117 Run-time dependency libarchive found: NO (tried pkgconfig) 00:10:41.117 Run-time dependency libbsd found: NO (tried pkgconfig) 00:10:41.117 Run-time dependency jansson found: NO (tried pkgconfig) 00:10:41.117 Run-time dependency openssl found: YES 3.0.9 00:10:41.117 Run-time dependency libpcap found: YES 1.10.4 00:10:41.117 Has header "pcap.h" with dependency libpcap: YES 00:10:41.117 Compiler for C supports arguments -Wcast-qual: YES 00:10:41.117 Compiler for C supports arguments -Wdeprecated: YES 00:10:41.117 Compiler for C supports arguments -Wformat: YES 00:10:41.117 Compiler for C supports arguments -Wformat-nonliteral: NO 00:10:41.117 Compiler for C supports arguments -Wformat-security: NO 00:10:41.117 Compiler for C supports arguments -Wmissing-declarations: YES 00:10:41.117 Compiler for C supports arguments -Wmissing-prototypes: YES 00:10:41.117 Compiler for C supports arguments -Wnested-externs: YES 00:10:41.117 Compiler for C supports arguments -Wold-style-definition: YES 00:10:41.117 Compiler for C supports arguments -Wpointer-arith: YES 00:10:41.117 Compiler for C supports arguments -Wsign-compare: YES 00:10:41.117 Compiler for C supports arguments -Wstrict-prototypes: YES 00:10:41.117 Compiler for C supports arguments -Wundef: YES 00:10:41.117 Compiler for C supports arguments -Wwrite-strings: YES 00:10:41.117 Compiler for C supports arguments -Wno-address-of-packed-member: YES 00:10:41.117 Compiler for C supports arguments -Wno-packed-not-aligned: YES 00:10:41.117 Compiler for C supports arguments -Wno-missing-field-initializers: YES 00:10:41.117 Compiler for C supports arguments -Wno-zero-length-bounds: YES 00:10:41.117 Program objdump found: YES (/usr/bin/objdump) 00:10:41.117 Compiler for C supports arguments -mavx512f: YES 00:10:41.117 Checking if "AVX512 checking" compiles: YES 00:10:41.117 Fetching value of define "__SSE4_2__" : 1 00:10:41.117 Fetching value of define "__AES__" : 1 00:10:41.117 Fetching value of define "__AVX__" : 1 00:10:41.117 
Fetching value of define "__AVX2__" : 1 00:10:41.117 Fetching value of define "__AVX512BW__" : (undefined) 00:10:41.117 Fetching value of define "__AVX512CD__" : (undefined) 00:10:41.117 Fetching value of define "__AVX512DQ__" : (undefined) 00:10:41.117 Fetching value of define "__AVX512F__" : (undefined) 00:10:41.117 Fetching value of define "__AVX512VL__" : (undefined) 00:10:41.117 Fetching value of define "__PCLMUL__" : 1 00:10:41.117 Fetching value of define "__RDRND__" : 1 00:10:41.117 Fetching value of define "__RDSEED__" : 1 00:10:41.117 Fetching value of define "__VPCLMULQDQ__" : (undefined) 00:10:41.117 Fetching value of define "__znver1__" : (undefined) 00:10:41.117 Fetching value of define "__znver2__" : (undefined) 00:10:41.117 Fetching value of define "__znver3__" : (undefined) 00:10:41.117 Fetching value of define "__znver4__" : (undefined) 00:10:41.117 Library asan found: YES 00:10:41.117 Compiler for C supports arguments -Wno-format-truncation: YES 00:10:41.117 Message: lib/log: Defining dependency "log" 00:10:41.117 Message: lib/kvargs: Defining dependency "kvargs" 00:10:41.117 Message: lib/telemetry: Defining dependency "telemetry" 00:10:41.117 Library rt found: YES 00:10:41.117 Checking for function "getentropy" : NO 00:10:41.117 Message: lib/eal: Defining dependency "eal" 00:10:41.117 Message: lib/ring: Defining dependency "ring" 00:10:41.117 Message: lib/rcu: Defining dependency "rcu" 00:10:41.117 Message: lib/mempool: Defining dependency "mempool" 00:10:41.117 Message: lib/mbuf: Defining dependency "mbuf" 00:10:41.117 Fetching value of define "__PCLMUL__" : 1 (cached) 00:10:41.117 Fetching value of define "__AVX512F__" : (undefined) (cached) 00:10:41.117 Compiler for C supports arguments -mpclmul: YES 00:10:41.117 Compiler for C supports arguments -maes: YES 00:10:41.117 Compiler for C supports arguments -mavx512f: YES (cached) 00:10:41.117 Compiler for C supports arguments -mavx512bw: YES 00:10:41.117 Compiler for C supports arguments -mavx512dq: YES 00:10:41.117 Compiler for C supports arguments -mavx512vl: YES 00:10:41.117 Compiler for C supports arguments -mvpclmulqdq: YES 00:10:41.117 Compiler for C supports arguments -mavx2: YES 00:10:41.117 Compiler for C supports arguments -mavx: YES 00:10:41.117 Message: lib/net: Defining dependency "net" 00:10:41.117 Message: lib/meter: Defining dependency "meter" 00:10:41.117 Message: lib/ethdev: Defining dependency "ethdev" 00:10:41.117 Message: lib/pci: Defining dependency "pci" 00:10:41.117 Message: lib/cmdline: Defining dependency "cmdline" 00:10:41.117 Message: lib/hash: Defining dependency "hash" 00:10:41.117 Message: lib/timer: Defining dependency "timer" 00:10:41.117 Message: lib/compressdev: Defining dependency "compressdev" 00:10:41.117 Message: lib/cryptodev: Defining dependency "cryptodev" 00:10:41.117 Message: lib/dmadev: Defining dependency "dmadev" 00:10:41.117 Compiler for C supports arguments -Wno-cast-qual: YES 00:10:41.117 Message: lib/power: Defining dependency "power" 00:10:41.117 Message: lib/reorder: Defining dependency "reorder" 00:10:41.117 Message: lib/security: Defining dependency "security" 00:10:41.117 Has header "linux/userfaultfd.h" : YES 00:10:41.117 Has header "linux/vduse.h" : YES 00:10:41.117 Message: lib/vhost: Defining dependency "vhost" 00:10:41.117 Compiler for C supports arguments -Wno-format-truncation: YES (cached) 00:10:41.117 Message: drivers/bus/pci: Defining dependency "bus_pci" 00:10:41.117 Message: drivers/bus/vdev: Defining dependency "bus_vdev" 00:10:41.117 Message: 
drivers/mempool/ring: Defining dependency "mempool_ring" 00:10:41.117 Message: Disabling raw/* drivers: missing internal dependency "rawdev" 00:10:41.117 Message: Disabling regex/* drivers: missing internal dependency "regexdev" 00:10:41.117 Message: Disabling ml/* drivers: missing internal dependency "mldev" 00:10:41.117 Message: Disabling event/* drivers: missing internal dependency "eventdev" 00:10:41.117 Message: Disabling baseband/* drivers: missing internal dependency "bbdev" 00:10:41.117 Message: Disabling gpu/* drivers: missing internal dependency "gpudev" 00:10:41.117 Program doxygen found: YES (/usr/bin/doxygen) 00:10:41.117 Configuring doxy-api-html.conf using configuration 00:10:41.117 Configuring doxy-api-man.conf using configuration 00:10:41.117 Program mandb found: YES (/usr/bin/mandb) 00:10:41.117 Program sphinx-build found: NO 00:10:41.117 Configuring rte_build_config.h using configuration 00:10:41.117 Message: 00:10:41.117 ================= 00:10:41.117 Applications Enabled 00:10:41.117 ================= 00:10:41.117 00:10:41.117 apps: 00:10:41.117 00:10:41.117 00:10:41.117 Message: 00:10:41.117 ================= 00:10:41.117 Libraries Enabled 00:10:41.117 ================= 00:10:41.117 00:10:41.117 libs: 00:10:41.117 log, kvargs, telemetry, eal, ring, rcu, mempool, mbuf, 00:10:41.117 net, meter, ethdev, pci, cmdline, hash, timer, compressdev, 00:10:41.117 cryptodev, dmadev, power, reorder, security, vhost, 00:10:41.117 00:10:41.117 Message: 00:10:41.117 =============== 00:10:41.117 Drivers Enabled 00:10:41.117 =============== 00:10:41.117 00:10:41.117 common: 00:10:41.117 00:10:41.117 bus: 00:10:41.117 pci, vdev, 00:10:41.117 mempool: 00:10:41.117 ring, 00:10:41.117 dma: 00:10:41.117 00:10:41.117 net: 00:10:41.117 00:10:41.117 crypto: 00:10:41.117 00:10:41.117 compress: 00:10:41.117 00:10:41.117 vdpa: 00:10:41.117 00:10:41.117 00:10:41.117 Message: 00:10:41.117 ================= 00:10:41.118 Content Skipped 00:10:41.118 ================= 00:10:41.118 00:10:41.118 apps: 00:10:41.118 dumpcap: explicitly disabled via build config 00:10:41.118 graph: explicitly disabled via build config 00:10:41.118 pdump: explicitly disabled via build config 00:10:41.118 proc-info: explicitly disabled via build config 00:10:41.118 test-acl: explicitly disabled via build config 00:10:41.118 test-bbdev: explicitly disabled via build config 00:10:41.118 test-cmdline: explicitly disabled via build config 00:10:41.118 test-compress-perf: explicitly disabled via build config 00:10:41.118 test-crypto-perf: explicitly disabled via build config 00:10:41.118 test-dma-perf: explicitly disabled via build config 00:10:41.118 test-eventdev: explicitly disabled via build config 00:10:41.118 test-fib: explicitly disabled via build config 00:10:41.118 test-flow-perf: explicitly disabled via build config 00:10:41.118 test-gpudev: explicitly disabled via build config 00:10:41.118 test-mldev: explicitly disabled via build config 00:10:41.118 test-pipeline: explicitly disabled via build config 00:10:41.118 test-pmd: explicitly disabled via build config 00:10:41.118 test-regex: explicitly disabled via build config 00:10:41.118 test-sad: explicitly disabled via build config 00:10:41.118 test-security-perf: explicitly disabled via build config 00:10:41.118 00:10:41.118 libs: 00:10:41.118 metrics: explicitly disabled via build config 00:10:41.118 acl: explicitly disabled via build config 00:10:41.118 bbdev: explicitly disabled via build config 00:10:41.118 bitratestats: explicitly disabled via build config 
00:10:41.118 bpf: explicitly disabled via build config 00:10:41.118 cfgfile: explicitly disabled via build config 00:10:41.118 distributor: explicitly disabled via build config 00:10:41.118 efd: explicitly disabled via build config 00:10:41.118 eventdev: explicitly disabled via build config 00:10:41.118 dispatcher: explicitly disabled via build config 00:10:41.118 gpudev: explicitly disabled via build config 00:10:41.118 gro: explicitly disabled via build config 00:10:41.118 gso: explicitly disabled via build config 00:10:41.118 ip_frag: explicitly disabled via build config 00:10:41.118 jobstats: explicitly disabled via build config 00:10:41.118 latencystats: explicitly disabled via build config 00:10:41.118 lpm: explicitly disabled via build config 00:10:41.118 member: explicitly disabled via build config 00:10:41.118 pcapng: explicitly disabled via build config 00:10:41.118 rawdev: explicitly disabled via build config 00:10:41.118 regexdev: explicitly disabled via build config 00:10:41.118 mldev: explicitly disabled via build config 00:10:41.118 rib: explicitly disabled via build config 00:10:41.118 sched: explicitly disabled via build config 00:10:41.118 stack: explicitly disabled via build config 00:10:41.118 ipsec: explicitly disabled via build config 00:10:41.118 pdcp: explicitly disabled via build config 00:10:41.118 fib: explicitly disabled via build config 00:10:41.118 port: explicitly disabled via build config 00:10:41.118 pdump: explicitly disabled via build config 00:10:41.118 table: explicitly disabled via build config 00:10:41.118 pipeline: explicitly disabled via build config 00:10:41.118 graph: explicitly disabled via build config 00:10:41.118 node: explicitly disabled via build config 00:10:41.118 00:10:41.118 drivers: 00:10:41.118 common/cpt: not in enabled drivers build config 00:10:41.118 common/dpaax: not in enabled drivers build config 00:10:41.118 common/iavf: not in enabled drivers build config 00:10:41.118 common/idpf: not in enabled drivers build config 00:10:41.118 common/mvep: not in enabled drivers build config 00:10:41.118 common/octeontx: not in enabled drivers build config 00:10:41.118 bus/auxiliary: not in enabled drivers build config 00:10:41.118 bus/cdx: not in enabled drivers build config 00:10:41.118 bus/dpaa: not in enabled drivers build config 00:10:41.118 bus/fslmc: not in enabled drivers build config 00:10:41.118 bus/ifpga: not in enabled drivers build config 00:10:41.118 bus/platform: not in enabled drivers build config 00:10:41.118 bus/vmbus: not in enabled drivers build config 00:10:41.118 common/cnxk: not in enabled drivers build config 00:10:41.118 common/mlx5: not in enabled drivers build config 00:10:41.118 common/nfp: not in enabled drivers build config 00:10:41.118 common/qat: not in enabled drivers build config 00:10:41.118 common/sfc_efx: not in enabled drivers build config 00:10:41.118 mempool/bucket: not in enabled drivers build config 00:10:41.118 mempool/cnxk: not in enabled drivers build config 00:10:41.118 mempool/dpaa: not in enabled drivers build config 00:10:41.118 mempool/dpaa2: not in enabled drivers build config 00:10:41.118 mempool/octeontx: not in enabled drivers build config 00:10:41.118 mempool/stack: not in enabled drivers build config 00:10:41.118 dma/cnxk: not in enabled drivers build config 00:10:41.118 dma/dpaa: not in enabled drivers build config 00:10:41.118 dma/dpaa2: not in enabled drivers build config 00:10:41.118 dma/hisilicon: not in enabled drivers build config 00:10:41.118 dma/idxd: not in enabled drivers 
build config 00:10:41.118 dma/ioat: not in enabled drivers build config 00:10:41.118 dma/skeleton: not in enabled drivers build config 00:10:41.118 net/af_packet: not in enabled drivers build config 00:10:41.118 net/af_xdp: not in enabled drivers build config 00:10:41.118 net/ark: not in enabled drivers build config 00:10:41.118 net/atlantic: not in enabled drivers build config 00:10:41.118 net/avp: not in enabled drivers build config 00:10:41.118 net/axgbe: not in enabled drivers build config 00:10:41.118 net/bnx2x: not in enabled drivers build config 00:10:41.118 net/bnxt: not in enabled drivers build config 00:10:41.118 net/bonding: not in enabled drivers build config 00:10:41.118 net/cnxk: not in enabled drivers build config 00:10:41.118 net/cpfl: not in enabled drivers build config 00:10:41.118 net/cxgbe: not in enabled drivers build config 00:10:41.118 net/dpaa: not in enabled drivers build config 00:10:41.118 net/dpaa2: not in enabled drivers build config 00:10:41.118 net/e1000: not in enabled drivers build config 00:10:41.118 net/ena: not in enabled drivers build config 00:10:41.118 net/enetc: not in enabled drivers build config 00:10:41.118 net/enetfec: not in enabled drivers build config 00:10:41.118 net/enic: not in enabled drivers build config 00:10:41.118 net/failsafe: not in enabled drivers build config 00:10:41.118 net/fm10k: not in enabled drivers build config 00:10:41.118 net/gve: not in enabled drivers build config 00:10:41.118 net/hinic: not in enabled drivers build config 00:10:41.118 net/hns3: not in enabled drivers build config 00:10:41.118 net/i40e: not in enabled drivers build config 00:10:41.118 net/iavf: not in enabled drivers build config 00:10:41.118 net/ice: not in enabled drivers build config 00:10:41.118 net/idpf: not in enabled drivers build config 00:10:41.118 net/igc: not in enabled drivers build config 00:10:41.118 net/ionic: not in enabled drivers build config 00:10:41.118 net/ipn3ke: not in enabled drivers build config 00:10:41.118 net/ixgbe: not in enabled drivers build config 00:10:41.118 net/mana: not in enabled drivers build config 00:10:41.118 net/memif: not in enabled drivers build config 00:10:41.118 net/mlx4: not in enabled drivers build config 00:10:41.118 net/mlx5: not in enabled drivers build config 00:10:41.118 net/mvneta: not in enabled drivers build config 00:10:41.118 net/mvpp2: not in enabled drivers build config 00:10:41.118 net/netvsc: not in enabled drivers build config 00:10:41.118 net/nfb: not in enabled drivers build config 00:10:41.118 net/nfp: not in enabled drivers build config 00:10:41.118 net/ngbe: not in enabled drivers build config 00:10:41.118 net/null: not in enabled drivers build config 00:10:41.118 net/octeontx: not in enabled drivers build config 00:10:41.118 net/octeon_ep: not in enabled drivers build config 00:10:41.118 net/pcap: not in enabled drivers build config 00:10:41.118 net/pfe: not in enabled drivers build config 00:10:41.118 net/qede: not in enabled drivers build config 00:10:41.118 net/ring: not in enabled drivers build config 00:10:41.118 net/sfc: not in enabled drivers build config 00:10:41.118 net/softnic: not in enabled drivers build config 00:10:41.118 net/tap: not in enabled drivers build config 00:10:41.118 net/thunderx: not in enabled drivers build config 00:10:41.118 net/txgbe: not in enabled drivers build config 00:10:41.118 net/vdev_netvsc: not in enabled drivers build config 00:10:41.118 net/vhost: not in enabled drivers build config 00:10:41.118 net/virtio: not in enabled drivers build config 
00:10:41.118 net/vmxnet3: not in enabled drivers build config 00:10:41.118 raw/*: missing internal dependency, "rawdev" 00:10:41.118 crypto/armv8: not in enabled drivers build config 00:10:41.118 crypto/bcmfs: not in enabled drivers build config 00:10:41.118 crypto/caam_jr: not in enabled drivers build config 00:10:41.118 crypto/ccp: not in enabled drivers build config 00:10:41.118 crypto/cnxk: not in enabled drivers build config 00:10:41.118 crypto/dpaa_sec: not in enabled drivers build config 00:10:41.118 crypto/dpaa2_sec: not in enabled drivers build config 00:10:41.118 crypto/ipsec_mb: not in enabled drivers build config 00:10:41.118 crypto/mlx5: not in enabled drivers build config 00:10:41.118 crypto/mvsam: not in enabled drivers build config 00:10:41.118 crypto/nitrox: not in enabled drivers build config 00:10:41.118 crypto/null: not in enabled drivers build config 00:10:41.118 crypto/octeontx: not in enabled drivers build config 00:10:41.118 crypto/openssl: not in enabled drivers build config 00:10:41.118 crypto/scheduler: not in enabled drivers build config 00:10:41.118 crypto/uadk: not in enabled drivers build config 00:10:41.118 crypto/virtio: not in enabled drivers build config 00:10:41.118 compress/isal: not in enabled drivers build config 00:10:41.118 compress/mlx5: not in enabled drivers build config 00:10:41.118 compress/octeontx: not in enabled drivers build config 00:10:41.118 compress/zlib: not in enabled drivers build config 00:10:41.118 regex/*: missing internal dependency, "regexdev" 00:10:41.118 ml/*: missing internal dependency, "mldev" 00:10:41.118 vdpa/ifc: not in enabled drivers build config 00:10:41.118 vdpa/mlx5: not in enabled drivers build config 00:10:41.118 vdpa/nfp: not in enabled drivers build config 00:10:41.118 vdpa/sfc: not in enabled drivers build config 00:10:41.118 event/*: missing internal dependency, "eventdev" 00:10:41.118 baseband/*: missing internal dependency, "bbdev" 00:10:41.118 gpu/*: missing internal dependency, "gpudev" 00:10:41.118 00:10:41.118 00:10:41.391 Build targets in project: 85 00:10:41.391 00:10:41.391 DPDK 23.11.0 00:10:41.391 00:10:41.391 User defined options 00:10:41.391 buildtype : debug 00:10:41.391 default_library : shared 00:10:41.391 libdir : lib 00:10:41.391 prefix : /home/vagrant/spdk_repo/spdk/dpdk/build 00:10:41.391 b_sanitize : address 00:10:41.391 c_args : -Wno-stringop-overflow -fcommon -Wno-stringop-overread -Wno-array-bounds -fPIC -Werror 00:10:41.391 c_link_args : 00:10:41.391 cpu_instruction_set: native 00:10:41.391 disable_apps : dumpcap,graph,pdump,proc-info,test-acl,test-bbdev,test-cmdline,test-compress-perf,test-crypto-perf,test-dma-perf,test-eventdev,test-fib,test-flow-perf,test-gpudev,test-mldev,test-pipeline,test-pmd,test-regex,test-sad,test-security-perf,test 00:10:41.391 disable_libs : acl,bbdev,bitratestats,bpf,cfgfile,dispatcher,distributor,efd,eventdev,fib,gpudev,graph,gro,gso,ip_frag,ipsec,jobstats,latencystats,lpm,member,metrics,mldev,node,pcapng,pdcp,pdump,pipeline,port,rawdev,regexdev,rib,sched,stack,table 00:10:41.391 enable_docs : false 00:10:41.391 enable_drivers : bus,bus/pci,bus/vdev,mempool/ring 00:10:41.391 enable_kmods : false 00:10:41.391 tests : false 00:10:41.391 00:10:41.391 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:10:41.958 ninja: Entering directory `/home/vagrant/spdk_repo/spdk/dpdk/build-tmp' 00:10:42.218 [1/265] Compiling C object lib/librte_log.a.p/log_log_linux.c.o 00:10:42.218 [2/265] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o 
00:10:42.218 [3/265] Linking static target lib/librte_kvargs.a 00:10:42.218 [4/265] Compiling C object lib/librte_log.a.p/log_log.c.o 00:10:42.218 [5/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o 00:10:42.218 [6/265] Linking static target lib/librte_log.a 00:10:42.218 [7/265] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o 00:10:42.218 [8/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o 00:10:42.477 [9/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o 00:10:42.477 [10/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o 00:10:42.736 [11/265] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output) 00:10:42.995 [12/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o 00:10:43.254 [13/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o 00:10:43.254 [14/265] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o 00:10:43.254 [15/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o 00:10:43.254 [16/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o 00:10:43.254 [17/265] Generating lib/log.sym_chk with a custom command (wrapped by meson to capture output) 00:10:43.514 [18/265] Linking target lib/librte_log.so.24.0 00:10:43.514 [19/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o 00:10:43.774 [20/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o 00:10:43.774 [21/265] Generating symbol file lib/librte_log.so.24.0.p/librte_log.so.24.0.symbols 00:10:43.774 [22/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o 00:10:43.774 [23/265] Linking target lib/librte_kvargs.so.24.0 00:10:44.033 [24/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o 00:10:44.033 [25/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o 00:10:44.033 [26/265] Generating symbol file lib/librte_kvargs.so.24.0.p/librte_kvargs.so.24.0.symbols 00:10:44.033 [27/265] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o 00:10:44.033 [28/265] Linking static target lib/librte_telemetry.a 00:10:44.294 [29/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o 00:10:44.294 [30/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o 00:10:44.551 [31/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o 00:10:44.551 [32/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o 00:10:44.808 [33/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o 00:10:44.808 [34/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o 00:10:44.808 [35/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o 00:10:45.090 [36/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o 00:10:45.090 [37/265] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o 00:10:45.090 [38/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o 00:10:45.090 [39/265] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o 00:10:45.090 [40/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o 00:10:45.090 [41/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o 00:10:45.090 [42/265] Generating 
lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output) 00:10:45.348 [43/265] Linking target lib/librte_telemetry.so.24.0 00:10:45.348 [44/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o 00:10:45.916 [45/265] Generating symbol file lib/librte_telemetry.so.24.0.p/librte_telemetry.so.24.0.symbols 00:10:45.916 [46/265] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o 00:10:45.916 [47/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o 00:10:45.916 [48/265] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o 00:10:46.175 [49/265] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o 00:10:46.175 [50/265] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o 00:10:46.175 [51/265] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o 00:10:46.433 [52/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o 00:10:46.433 [53/265] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o 00:10:46.433 [54/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o 00:10:46.433 [55/265] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o 00:10:46.433 [56/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o 00:10:46.692 [57/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o 00:10:46.692 [58/265] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o 00:10:46.692 [59/265] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o 00:10:46.950 [60/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o 00:10:46.950 [61/265] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o 00:10:46.950 [62/265] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o 00:10:47.209 [63/265] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o 00:10:47.209 [64/265] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o 00:10:47.209 [65/265] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o 00:10:47.209 [66/265] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o 00:10:47.209 [67/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o 00:10:47.469 [68/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o 00:10:47.469 [69/265] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o 00:10:47.469 [70/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o 00:10:47.728 [71/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o 00:10:47.728 [72/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o 00:10:47.728 [73/265] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o 00:10:47.728 [74/265] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o 00:10:47.728 [75/265] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o 00:10:47.728 [76/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o 00:10:47.987 [77/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o 00:10:47.987 [78/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o 00:10:47.987 [79/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o 00:10:48.246 [80/265] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o 00:10:48.246 [81/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o 00:10:48.246 [82/265] Compiling C object 
lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o 00:10:48.524 [83/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o 00:10:48.786 [84/265] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o 00:10:48.786 [85/265] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o 00:10:48.786 [86/265] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o 00:10:48.786 [87/265] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o 00:10:48.786 [88/265] Linking static target lib/librte_ring.a 00:10:48.786 [89/265] Linking static target lib/librte_rcu.a 00:10:48.786 [90/265] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o 00:10:49.045 [91/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o 00:10:49.045 [92/265] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o 00:10:49.045 [93/265] Linking static target lib/librte_mempool.a 00:10:49.045 [94/265] Linking static target lib/librte_eal.a 00:10:49.045 [95/265] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o 00:10:49.304 [96/265] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output) 00:10:49.304 [97/265] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o 00:10:49.304 [98/265] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output) 00:10:49.563 [99/265] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o 00:10:49.563 [100/265] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o 00:10:49.563 [101/265] Compiling C object lib/net/libnet_crc_avx512_lib.a.p/net_crc_avx512.c.o 00:10:49.563 [102/265] Linking static target lib/net/libnet_crc_avx512_lib.a 00:10:49.563 [103/265] Linking static target lib/librte_mbuf.a 00:10:49.821 [104/265] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o 00:10:49.821 [105/265] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o 00:10:50.079 [106/265] Compiling C object lib/librte_net.a.p/net_rte_net.c.o 00:10:50.079 [107/265] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o 00:10:50.337 [108/265] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output) 00:10:50.337 [109/265] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o 00:10:50.337 [110/265] Linking static target lib/librte_meter.a 00:10:50.337 [111/265] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o 00:10:50.337 [112/265] Linking static target lib/librte_net.a 00:10:50.596 [113/265] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o 00:10:50.596 [114/265] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o 00:10:50.853 [115/265] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output) 00:10:50.853 [116/265] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output) 00:10:50.853 [117/265] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o 00:10:51.112 [118/265] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o 00:10:51.112 [119/265] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output) 00:10:51.371 [120/265] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o 00:10:51.650 [121/265] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o 00:10:51.650 [122/265] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_telemetry.c.o 00:10:52.221 [123/265] Compiling C object 
lib/librte_ethdev.a.p/ethdev_rte_tm.c.o 00:10:52.221 [124/265] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o 00:10:52.221 [125/265] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o 00:10:52.221 [126/265] Linking static target lib/librte_pci.a 00:10:52.221 [127/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o 00:10:52.221 [128/265] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o 00:10:52.479 [129/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o 00:10:52.479 [130/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o 00:10:52.479 [131/265] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o 00:10:52.479 [132/265] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o 00:10:52.737 [133/265] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o 00:10:52.737 [134/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o 00:10:52.737 [135/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o 00:10:52.737 [136/265] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output) 00:10:52.737 [137/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o 00:10:52.737 [138/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o 00:10:52.737 [139/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o 00:10:52.737 [140/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o 00:10:52.737 [141/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o 00:10:52.995 [142/265] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o 00:10:52.995 [143/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o 00:10:53.253 [144/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o 00:10:53.253 [145/265] Linking static target lib/librte_cmdline.a 00:10:53.510 [146/265] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o 00:10:53.511 [147/265] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o 00:10:53.511 [148/265] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o 00:10:53.768 [149/265] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o 00:10:53.768 [150/265] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o 00:10:53.768 [151/265] Linking static target lib/librte_ethdev.a 00:10:53.768 [152/265] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o 00:10:53.768 [153/265] Linking static target lib/librte_timer.a 00:10:54.027 [154/265] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o 00:10:54.027 [155/265] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o 00:10:54.285 [156/265] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o 00:10:54.285 [157/265] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o 00:10:54.543 [158/265] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o 00:10:54.543 [159/265] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output) 00:10:54.543 [160/265] Linking static target lib/librte_compressdev.a 00:10:54.543 [161/265] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev_trace_points.c.o 00:10:54.801 [162/265] Compiling C object lib/librte_power.a.p/power_power_common.c.o 00:10:54.801 [163/265] Compiling C object 
lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o 00:10:54.801 [164/265] Linking static target lib/librte_hash.a 00:10:54.801 [165/265] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o 00:10:54.801 [166/265] Linking static target lib/librte_dmadev.a 00:10:54.801 [167/265] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output) 00:10:55.059 [168/265] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o 00:10:55.059 [169/265] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o 00:10:55.317 [170/265] Compiling C object lib/librte_power.a.p/power_power_amd_pstate_cpufreq.c.o 00:10:55.317 [171/265] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output) 00:10:55.317 [172/265] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o 00:10:55.317 [173/265] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output) 00:10:55.584 [174/265] Compiling C object lib/librte_power.a.p/power_power_intel_uncore.c.o 00:10:55.584 [175/265] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o 00:10:55.584 [176/265] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o 00:10:55.584 [177/265] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output) 00:10:55.584 [178/265] Linking static target lib/librte_cryptodev.a 00:10:55.842 [179/265] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o 00:10:55.842 [180/265] Compiling C object lib/librte_power.a.p/power_rte_power.c.o 00:10:55.842 [181/265] Compiling C object lib/librte_power.a.p/power_rte_power_uncore.c.o 00:10:56.101 [182/265] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o 00:10:56.101 [183/265] Linking static target lib/librte_power.a 00:10:56.359 [184/265] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o 00:10:56.359 [185/265] Linking static target lib/librte_reorder.a 00:10:56.359 [186/265] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o 00:10:56.617 [187/265] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o 00:10:56.617 [188/265] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o 00:10:56.617 [189/265] Compiling C object lib/librte_security.a.p/security_rte_security.c.o 00:10:56.617 [190/265] Linking static target lib/librte_security.a 00:10:56.876 [191/265] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output) 00:10:57.135 [192/265] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o 00:10:57.135 [193/265] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output) 00:10:57.393 [194/265] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output) 00:10:57.393 [195/265] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o 00:10:57.393 [196/265] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net_ctrl.c.o 00:10:57.653 [197/265] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o 00:10:57.653 [198/265] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o 00:10:57.653 [199/265] Compiling C object lib/librte_vhost.a.p/vhost_vduse.c.o 00:10:57.912 [200/265] Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output) 00:10:57.912 [201/265] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o 00:10:57.912 [202/265] Compiling C object 
drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o 00:10:57.912 [203/265] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o 00:10:58.481 [204/265] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o 00:10:58.481 [205/265] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o 00:10:58.481 [206/265] Linking static target drivers/libtmp_rte_bus_vdev.a 00:10:58.481 [207/265] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o 00:10:58.481 [208/265] Linking static target drivers/libtmp_rte_bus_pci.a 00:10:58.739 [209/265] Generating drivers/rte_bus_vdev.pmd.c with a custom command 00:10:58.739 [210/265] Compiling C object drivers/librte_bus_vdev.so.24.0.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:10:58.739 [211/265] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:10:58.739 [212/265] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o 00:10:58.739 [213/265] Linking static target drivers/librte_bus_vdev.a 00:10:58.739 [214/265] Linking static target drivers/libtmp_rte_mempool_ring.a 00:10:58.739 [215/265] Generating drivers/rte_bus_pci.pmd.c with a custom command 00:10:58.739 [216/265] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:10:58.739 [217/265] Compiling C object drivers/librte_bus_pci.so.24.0.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:10:58.739 [218/265] Linking static target drivers/librte_bus_pci.a 00:10:58.999 [219/265] Generating drivers/rte_mempool_ring.pmd.c with a custom command 00:10:58.999 [220/265] Compiling C object drivers/librte_mempool_ring.so.24.0.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:10:58.999 [221/265] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:10:58.999 [222/265] Linking static target drivers/librte_mempool_ring.a 00:10:58.999 [223/265] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output) 00:10:59.258 [224/265] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output) 00:10:59.827 [225/265] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o 00:11:00.395 [226/265] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output) 00:11:00.396 [227/265] Linking target lib/librte_eal.so.24.0 00:11:00.655 [228/265] Generating symbol file lib/librte_eal.so.24.0.p/librte_eal.so.24.0.symbols 00:11:00.655 [229/265] Linking target lib/librte_ring.so.24.0 00:11:00.655 [230/265] Linking target lib/librte_dmadev.so.24.0 00:11:00.655 [231/265] Linking target lib/librte_timer.so.24.0 00:11:00.655 [232/265] Linking target drivers/librte_bus_vdev.so.24.0 00:11:00.655 [233/265] Linking target lib/librte_pci.so.24.0 00:11:00.655 [234/265] Linking target lib/librte_meter.so.24.0 00:11:00.655 [235/265] Generating symbol file lib/librte_dmadev.so.24.0.p/librte_dmadev.so.24.0.symbols 00:11:00.655 [236/265] Generating symbol file lib/librte_ring.so.24.0.p/librte_ring.so.24.0.symbols 00:11:00.655 [237/265] Generating symbol file lib/librte_timer.so.24.0.p/librte_timer.so.24.0.symbols 00:11:00.655 [238/265] Generating symbol file lib/librte_pci.so.24.0.p/librte_pci.so.24.0.symbols 00:11:00.915 [239/265] Generating symbol file lib/librte_meter.so.24.0.p/librte_meter.so.24.0.symbols 00:11:00.915 [240/265] Linking target lib/librte_rcu.so.24.0 00:11:00.915 [241/265] Linking target lib/librte_mempool.so.24.0 
00:11:00.915 [242/265] Linking target drivers/librte_bus_pci.so.24.0 00:11:00.915 [243/265] Generating symbol file lib/librte_rcu.so.24.0.p/librte_rcu.so.24.0.symbols 00:11:00.915 [244/265] Generating symbol file lib/librte_mempool.so.24.0.p/librte_mempool.so.24.0.symbols 00:11:00.915 [245/265] Linking target drivers/librte_mempool_ring.so.24.0 00:11:00.915 [246/265] Linking target lib/librte_mbuf.so.24.0 00:11:01.175 [247/265] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output) 00:11:01.175 [248/265] Generating symbol file lib/librte_mbuf.so.24.0.p/librte_mbuf.so.24.0.symbols 00:11:01.175 [249/265] Linking target lib/librte_net.so.24.0 00:11:01.175 [250/265] Linking target lib/librte_reorder.so.24.0 00:11:01.175 [251/265] Linking target lib/librte_compressdev.so.24.0 00:11:01.175 [252/265] Linking target lib/librte_cryptodev.so.24.0 00:11:01.434 [253/265] Generating symbol file lib/librte_net.so.24.0.p/librte_net.so.24.0.symbols 00:11:01.434 [254/265] Generating symbol file lib/librte_cryptodev.so.24.0.p/librte_cryptodev.so.24.0.symbols 00:11:01.434 [255/265] Linking target lib/librte_cmdline.so.24.0 00:11:01.434 [256/265] Linking target lib/librte_hash.so.24.0 00:11:01.434 [257/265] Linking target lib/librte_security.so.24.0 00:11:01.434 [258/265] Linking target lib/librte_ethdev.so.24.0 00:11:01.694 [259/265] Generating symbol file lib/librte_hash.so.24.0.p/librte_hash.so.24.0.symbols 00:11:01.694 [260/265] Generating symbol file lib/librte_ethdev.so.24.0.p/librte_ethdev.so.24.0.symbols 00:11:01.694 [261/265] Linking target lib/librte_power.so.24.0 00:11:04.985 [262/265] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o 00:11:04.985 [263/265] Linking static target lib/librte_vhost.a 00:11:06.435 [264/265] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output) 00:11:06.435 [265/265] Linking target lib/librte_vhost.so.24.0 00:11:06.435 INFO: autodetecting backend as ninja 00:11:06.435 INFO: calculating backend command to run: /usr/local/bin/ninja -C /home/vagrant/spdk_repo/spdk/dpdk/build-tmp -j 10 00:11:07.814 CC lib/ut/ut.o 00:11:07.814 CC lib/ut_mock/mock.o 00:11:07.814 CC lib/log/log.o 00:11:07.814 CC lib/log/log_flags.o 00:11:07.814 CC lib/log/log_deprecated.o 00:11:07.814 LIB libspdk_ut_mock.a 00:11:07.814 SO libspdk_ut_mock.so.6.0 00:11:08.073 LIB libspdk_ut.a 00:11:08.073 LIB libspdk_log.a 00:11:08.073 SO libspdk_ut.so.2.0 00:11:08.073 SO libspdk_log.so.7.0 00:11:08.073 SYMLINK libspdk_ut_mock.so 00:11:08.073 SYMLINK libspdk_ut.so 00:11:08.073 SYMLINK libspdk_log.so 00:11:08.332 CC lib/dma/dma.o 00:11:08.332 CC lib/util/base64.o 00:11:08.332 CC lib/util/bit_array.o 00:11:08.332 CC lib/util/crc16.o 00:11:08.332 CC lib/util/cpuset.o 00:11:08.332 CC lib/util/crc32c.o 00:11:08.332 CC lib/util/crc32.o 00:11:08.332 CC lib/ioat/ioat.o 00:11:08.332 CXX lib/trace_parser/trace.o 00:11:08.590 CC lib/vfio_user/host/vfio_user_pci.o 00:11:08.590 CC lib/util/crc32_ieee.o 00:11:08.590 CC lib/vfio_user/host/vfio_user.o 00:11:08.590 CC lib/util/crc64.o 00:11:08.590 LIB libspdk_dma.a 00:11:08.590 CC lib/util/dif.o 00:11:08.590 CC lib/util/fd.o 00:11:08.590 CC lib/util/file.o 00:11:08.590 SO libspdk_dma.so.4.0 00:11:08.849 CC lib/util/hexlify.o 00:11:08.849 CC lib/util/iov.o 00:11:08.849 LIB libspdk_ioat.a 00:11:08.849 SYMLINK libspdk_dma.so 00:11:08.849 CC lib/util/math.o 00:11:08.849 SO libspdk_ioat.so.7.0 00:11:08.849 CC lib/util/pipe.o 00:11:08.849 CC lib/util/strerror_tls.o 00:11:08.849 LIB libspdk_vfio_user.a 
00:11:08.849 SYMLINK libspdk_ioat.so 00:11:08.849 CC lib/util/string.o 00:11:08.849 CC lib/util/uuid.o 00:11:08.849 CC lib/util/fd_group.o 00:11:08.849 SO libspdk_vfio_user.so.5.0 00:11:08.849 CC lib/util/xor.o 00:11:08.849 CC lib/util/zipf.o 00:11:09.107 SYMLINK libspdk_vfio_user.so 00:11:09.365 LIB libspdk_util.a 00:11:09.623 SO libspdk_util.so.9.0 00:11:09.623 LIB libspdk_trace_parser.a 00:11:09.881 SO libspdk_trace_parser.so.5.0 00:11:09.881 SYMLINK libspdk_util.so 00:11:09.881 SYMLINK libspdk_trace_parser.so 00:11:10.140 CC lib/env_dpdk/env.o 00:11:10.140 CC lib/env_dpdk/memory.o 00:11:10.140 CC lib/env_dpdk/pci.o 00:11:10.140 CC lib/env_dpdk/threads.o 00:11:10.140 CC lib/env_dpdk/init.o 00:11:10.140 CC lib/vmd/vmd.o 00:11:10.140 CC lib/conf/conf.o 00:11:10.140 CC lib/json/json_parse.o 00:11:10.140 CC lib/rdma/common.o 00:11:10.140 CC lib/idxd/idxd.o 00:11:10.140 CC lib/idxd/idxd_user.o 00:11:10.398 LIB libspdk_conf.a 00:11:10.398 CC lib/json/json_util.o 00:11:10.398 SO libspdk_conf.so.6.0 00:11:10.398 CC lib/rdma/rdma_verbs.o 00:11:10.398 SYMLINK libspdk_conf.so 00:11:10.398 CC lib/vmd/led.o 00:11:10.398 CC lib/env_dpdk/pci_ioat.o 00:11:10.656 CC lib/json/json_write.o 00:11:10.656 CC lib/env_dpdk/pci_virtio.o 00:11:10.656 CC lib/env_dpdk/pci_vmd.o 00:11:10.656 CC lib/env_dpdk/pci_idxd.o 00:11:10.656 LIB libspdk_rdma.a 00:11:10.656 CC lib/env_dpdk/pci_event.o 00:11:10.656 SO libspdk_rdma.so.6.0 00:11:10.656 CC lib/env_dpdk/sigbus_handler.o 00:11:10.656 CC lib/env_dpdk/pci_dpdk.o 00:11:10.656 CC lib/env_dpdk/pci_dpdk_2207.o 00:11:10.656 SYMLINK libspdk_rdma.so 00:11:10.656 CC lib/env_dpdk/pci_dpdk_2211.o 00:11:10.915 LIB libspdk_idxd.a 00:11:10.915 SO libspdk_idxd.so.12.0 00:11:10.915 LIB libspdk_json.a 00:11:10.915 LIB libspdk_vmd.a 00:11:10.915 SYMLINK libspdk_idxd.so 00:11:10.915 SO libspdk_json.so.6.0 00:11:10.915 SO libspdk_vmd.so.6.0 00:11:10.915 SYMLINK libspdk_json.so 00:11:11.173 SYMLINK libspdk_vmd.so 00:11:11.173 CC lib/jsonrpc/jsonrpc_server.o 00:11:11.173 CC lib/jsonrpc/jsonrpc_server_tcp.o 00:11:11.173 CC lib/jsonrpc/jsonrpc_client.o 00:11:11.173 CC lib/jsonrpc/jsonrpc_client_tcp.o 00:11:11.741 LIB libspdk_jsonrpc.a 00:11:11.741 SO libspdk_jsonrpc.so.6.0 00:11:11.741 SYMLINK libspdk_jsonrpc.so 00:11:12.000 CC lib/rpc/rpc.o 00:11:12.258 LIB libspdk_env_dpdk.a 00:11:12.258 LIB libspdk_rpc.a 00:11:12.258 SO libspdk_rpc.so.6.0 00:11:12.527 SO libspdk_env_dpdk.so.14.0 00:11:12.527 SYMLINK libspdk_rpc.so 00:11:12.527 SYMLINK libspdk_env_dpdk.so 00:11:12.796 CC lib/keyring/keyring_rpc.o 00:11:12.796 CC lib/keyring/keyring.o 00:11:12.796 CC lib/notify/notify.o 00:11:12.796 CC lib/notify/notify_rpc.o 00:11:12.796 CC lib/trace/trace.o 00:11:12.796 CC lib/trace/trace_flags.o 00:11:12.796 CC lib/trace/trace_rpc.o 00:11:12.796 LIB libspdk_notify.a 00:11:13.054 SO libspdk_notify.so.6.0 00:11:13.054 LIB libspdk_keyring.a 00:11:13.054 LIB libspdk_trace.a 00:11:13.054 SO libspdk_keyring.so.1.0 00:11:13.054 SO libspdk_trace.so.10.0 00:11:13.054 SYMLINK libspdk_notify.so 00:11:13.054 SYMLINK libspdk_keyring.so 00:11:13.054 SYMLINK libspdk_trace.so 00:11:13.313 CC lib/thread/thread.o 00:11:13.313 CC lib/thread/iobuf.o 00:11:13.313 CC lib/sock/sock.o 00:11:13.313 CC lib/sock/sock_rpc.o 00:11:14.248 LIB libspdk_sock.a 00:11:14.248 SO libspdk_sock.so.9.0 00:11:14.248 SYMLINK libspdk_sock.so 00:11:14.506 CC lib/nvme/nvme_ctrlr_cmd.o 00:11:14.506 CC lib/nvme/nvme_ctrlr.o 00:11:14.506 CC lib/nvme/nvme_fabric.o 00:11:14.506 CC lib/nvme/nvme_ns_cmd.o 00:11:14.506 CC lib/nvme/nvme_ns.o 00:11:14.506 
CC lib/nvme/nvme_pcie_common.o 00:11:14.506 CC lib/nvme/nvme_pcie.o 00:11:14.506 CC lib/nvme/nvme.o 00:11:14.506 CC lib/nvme/nvme_qpair.o 00:11:15.442 CC lib/nvme/nvme_quirks.o 00:11:15.442 CC lib/nvme/nvme_transport.o 00:11:15.442 CC lib/nvme/nvme_discovery.o 00:11:15.442 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:11:15.442 LIB libspdk_thread.a 00:11:15.442 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:11:15.701 SO libspdk_thread.so.10.0 00:11:15.701 CC lib/nvme/nvme_tcp.o 00:11:15.701 CC lib/nvme/nvme_opal.o 00:11:15.701 SYMLINK libspdk_thread.so 00:11:15.701 CC lib/nvme/nvme_io_msg.o 00:11:15.960 CC lib/nvme/nvme_poll_group.o 00:11:15.960 CC lib/nvme/nvme_zns.o 00:11:16.219 CC lib/nvme/nvme_stubs.o 00:11:16.219 CC lib/nvme/nvme_auth.o 00:11:16.219 CC lib/nvme/nvme_cuse.o 00:11:16.489 CC lib/nvme/nvme_rdma.o 00:11:16.489 CC lib/accel/accel.o 00:11:16.489 CC lib/blob/blobstore.o 00:11:16.759 CC lib/blob/request.o 00:11:16.759 CC lib/init/json_config.o 00:11:17.018 CC lib/virtio/virtio.o 00:11:17.018 CC lib/init/subsystem.o 00:11:17.277 CC lib/init/subsystem_rpc.o 00:11:17.277 CC lib/init/rpc.o 00:11:17.277 CC lib/accel/accel_rpc.o 00:11:17.277 CC lib/virtio/virtio_vhost_user.o 00:11:17.277 CC lib/accel/accel_sw.o 00:11:17.277 CC lib/virtio/virtio_vfio_user.o 00:11:17.277 CC lib/virtio/virtio_pci.o 00:11:17.277 LIB libspdk_init.a 00:11:17.536 SO libspdk_init.so.5.0 00:11:17.536 CC lib/blob/zeroes.o 00:11:17.536 SYMLINK libspdk_init.so 00:11:17.536 CC lib/blob/blob_bs_dev.o 00:11:17.799 CC lib/event/app.o 00:11:17.799 CC lib/event/reactor.o 00:11:17.799 CC lib/event/log_rpc.o 00:11:17.799 CC lib/event/app_rpc.o 00:11:17.799 CC lib/event/scheduler_static.o 00:11:17.799 LIB libspdk_virtio.a 00:11:17.799 SO libspdk_virtio.so.7.0 00:11:17.799 SYMLINK libspdk_virtio.so 00:11:18.058 LIB libspdk_accel.a 00:11:18.058 SO libspdk_accel.so.15.0 00:11:18.058 LIB libspdk_nvme.a 00:11:18.058 SYMLINK libspdk_accel.so 00:11:18.317 LIB libspdk_event.a 00:11:18.317 SO libspdk_nvme.so.13.0 00:11:18.317 SO libspdk_event.so.13.0 00:11:18.317 CC lib/bdev/bdev.o 00:11:18.317 CC lib/bdev/bdev_rpc.o 00:11:18.317 CC lib/bdev/bdev_zone.o 00:11:18.317 CC lib/bdev/part.o 00:11:18.317 CC lib/bdev/scsi_nvme.o 00:11:18.576 SYMLINK libspdk_event.so 00:11:18.834 SYMLINK libspdk_nvme.so 00:11:20.740 LIB libspdk_blob.a 00:11:21.001 SO libspdk_blob.so.11.0 00:11:21.001 SYMLINK libspdk_blob.so 00:11:21.260 CC lib/lvol/lvol.o 00:11:21.260 CC lib/blobfs/blobfs.o 00:11:21.260 CC lib/blobfs/tree.o 00:11:22.198 LIB libspdk_bdev.a 00:11:22.198 SO libspdk_bdev.so.15.0 00:11:22.457 SYMLINK libspdk_bdev.so 00:11:22.457 LIB libspdk_blobfs.a 00:11:22.457 CC lib/scsi/dev.o 00:11:22.457 CC lib/scsi/lun.o 00:11:22.457 CC lib/scsi/port.o 00:11:22.457 CC lib/scsi/scsi.o 00:11:22.457 CC lib/nvmf/ctrlr.o 00:11:22.457 CC lib/nbd/nbd.o 00:11:22.457 SO libspdk_blobfs.so.10.0 00:11:22.457 LIB libspdk_lvol.a 00:11:22.457 CC lib/ftl/ftl_core.o 00:11:22.457 CC lib/ublk/ublk.o 00:11:22.717 SO libspdk_lvol.so.10.0 00:11:22.717 SYMLINK libspdk_blobfs.so 00:11:22.718 CC lib/nbd/nbd_rpc.o 00:11:22.718 SYMLINK libspdk_lvol.so 00:11:22.718 CC lib/ublk/ublk_rpc.o 00:11:22.718 CC lib/scsi/scsi_bdev.o 00:11:22.718 CC lib/ftl/ftl_init.o 00:11:22.978 CC lib/scsi/scsi_pr.o 00:11:22.978 CC lib/scsi/scsi_rpc.o 00:11:22.978 CC lib/nvmf/ctrlr_discovery.o 00:11:22.978 CC lib/scsi/task.o 00:11:22.978 CC lib/nvmf/ctrlr_bdev.o 00:11:22.978 LIB libspdk_nbd.a 00:11:22.978 CC lib/nvmf/subsystem.o 00:11:22.978 CC lib/ftl/ftl_layout.o 00:11:22.978 SO libspdk_nbd.so.7.0 00:11:23.237 SYMLINK 
libspdk_nbd.so 00:11:23.238 CC lib/nvmf/nvmf.o 00:11:23.238 CC lib/nvmf/nvmf_rpc.o 00:11:23.238 CC lib/nvmf/transport.o 00:11:23.496 LIB libspdk_scsi.a 00:11:23.496 LIB libspdk_ublk.a 00:11:23.496 CC lib/nvmf/tcp.o 00:11:23.496 SO libspdk_scsi.so.9.0 00:11:23.496 SO libspdk_ublk.so.3.0 00:11:23.496 CC lib/ftl/ftl_debug.o 00:11:23.496 SYMLINK libspdk_ublk.so 00:11:23.496 CC lib/nvmf/rdma.o 00:11:23.496 SYMLINK libspdk_scsi.so 00:11:23.496 CC lib/ftl/ftl_io.o 00:11:23.755 CC lib/ftl/ftl_sb.o 00:11:24.014 CC lib/ftl/ftl_l2p.o 00:11:24.014 CC lib/iscsi/conn.o 00:11:24.273 CC lib/ftl/ftl_l2p_flat.o 00:11:24.273 CC lib/vhost/vhost.o 00:11:24.273 CC lib/vhost/vhost_rpc.o 00:11:24.273 CC lib/vhost/vhost_scsi.o 00:11:24.273 CC lib/ftl/ftl_nv_cache.o 00:11:24.532 CC lib/vhost/vhost_blk.o 00:11:24.532 CC lib/iscsi/init_grp.o 00:11:24.791 CC lib/iscsi/iscsi.o 00:11:24.791 CC lib/iscsi/md5.o 00:11:24.791 CC lib/iscsi/param.o 00:11:25.051 CC lib/iscsi/portal_grp.o 00:11:25.051 CC lib/iscsi/tgt_node.o 00:11:25.051 CC lib/iscsi/iscsi_subsystem.o 00:11:25.310 CC lib/ftl/ftl_band.o 00:11:25.310 CC lib/ftl/ftl_band_ops.o 00:11:25.569 CC lib/vhost/rte_vhost_user.o 00:11:25.569 CC lib/iscsi/iscsi_rpc.o 00:11:25.569 CC lib/iscsi/task.o 00:11:25.569 CC lib/ftl/ftl_writer.o 00:11:25.569 CC lib/ftl/ftl_rq.o 00:11:25.569 CC lib/ftl/ftl_reloc.o 00:11:25.569 CC lib/ftl/ftl_l2p_cache.o 00:11:25.828 CC lib/ftl/ftl_p2l.o 00:11:25.828 CC lib/ftl/mngt/ftl_mngt.o 00:11:25.828 CC lib/ftl/mngt/ftl_mngt_bdev.o 00:11:25.828 CC lib/ftl/mngt/ftl_mngt_shutdown.o 00:11:25.828 CC lib/ftl/mngt/ftl_mngt_startup.o 00:11:26.087 CC lib/ftl/mngt/ftl_mngt_md.o 00:11:26.087 CC lib/ftl/mngt/ftl_mngt_misc.o 00:11:26.087 CC lib/ftl/mngt/ftl_mngt_ioch.o 00:11:26.087 CC lib/ftl/mngt/ftl_mngt_l2p.o 00:11:26.087 CC lib/ftl/mngt/ftl_mngt_band.o 00:11:26.087 CC lib/ftl/mngt/ftl_mngt_self_test.o 00:11:26.346 CC lib/ftl/mngt/ftl_mngt_p2l.o 00:11:26.346 CC lib/ftl/mngt/ftl_mngt_recovery.o 00:11:26.346 CC lib/ftl/mngt/ftl_mngt_upgrade.o 00:11:26.346 CC lib/ftl/utils/ftl_conf.o 00:11:26.346 LIB libspdk_nvmf.a 00:11:26.346 CC lib/ftl/utils/ftl_md.o 00:11:26.346 CC lib/ftl/utils/ftl_mempool.o 00:11:26.605 SO libspdk_nvmf.so.18.0 00:11:26.605 CC lib/ftl/utils/ftl_bitmap.o 00:11:26.605 LIB libspdk_iscsi.a 00:11:26.605 CC lib/ftl/utils/ftl_property.o 00:11:26.605 CC lib/ftl/utils/ftl_layout_tracker_bdev.o 00:11:26.605 CC lib/ftl/upgrade/ftl_layout_upgrade.o 00:11:26.605 CC lib/ftl/upgrade/ftl_sb_upgrade.o 00:11:26.605 SO libspdk_iscsi.so.8.0 00:11:26.605 LIB libspdk_vhost.a 00:11:26.605 CC lib/ftl/upgrade/ftl_p2l_upgrade.o 00:11:26.863 SYMLINK libspdk_nvmf.so 00:11:26.863 CC lib/ftl/upgrade/ftl_band_upgrade.o 00:11:26.863 SO libspdk_vhost.so.8.0 00:11:26.863 CC lib/ftl/upgrade/ftl_chunk_upgrade.o 00:11:26.863 CC lib/ftl/upgrade/ftl_sb_v3.o 00:11:26.863 SYMLINK libspdk_iscsi.so 00:11:26.863 CC lib/ftl/upgrade/ftl_sb_v5.o 00:11:26.863 CC lib/ftl/nvc/ftl_nvc_dev.o 00:11:26.863 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o 00:11:26.863 CC lib/ftl/base/ftl_base_dev.o 00:11:26.863 CC lib/ftl/base/ftl_base_bdev.o 00:11:26.863 SYMLINK libspdk_vhost.so 00:11:26.863 CC lib/ftl/ftl_trace.o 00:11:27.123 LIB libspdk_ftl.a 00:11:27.382 SO libspdk_ftl.so.9.0 00:11:27.951 SYMLINK libspdk_ftl.so 00:11:28.210 CC module/env_dpdk/env_dpdk_rpc.o 00:11:28.470 CC module/blob/bdev/blob_bdev.o 00:11:28.470 CC module/sock/posix/posix.o 00:11:28.470 CC module/keyring/file/keyring.o 00:11:28.470 CC module/sock/uring/uring.o 00:11:28.470 CC module/accel/dsa/accel_dsa.o 00:11:28.470 CC 
module/accel/ioat/accel_ioat.o 00:11:28.470 CC module/accel/error/accel_error.o 00:11:28.470 CC module/scheduler/dynamic/scheduler_dynamic.o 00:11:28.470 CC module/scheduler/dpdk_governor/dpdk_governor.o 00:11:28.470 LIB libspdk_env_dpdk_rpc.a 00:11:28.470 SO libspdk_env_dpdk_rpc.so.6.0 00:11:28.470 SYMLINK libspdk_env_dpdk_rpc.so 00:11:28.470 CC module/accel/ioat/accel_ioat_rpc.o 00:11:28.470 CC module/keyring/file/keyring_rpc.o 00:11:28.470 LIB libspdk_scheduler_dpdk_governor.a 00:11:28.470 CC module/accel/error/accel_error_rpc.o 00:11:28.729 SO libspdk_scheduler_dpdk_governor.so.4.0 00:11:28.729 LIB libspdk_scheduler_dynamic.a 00:11:28.729 CC module/accel/dsa/accel_dsa_rpc.o 00:11:28.729 LIB libspdk_blob_bdev.a 00:11:28.729 LIB libspdk_accel_ioat.a 00:11:28.729 SO libspdk_scheduler_dynamic.so.4.0 00:11:28.729 LIB libspdk_keyring_file.a 00:11:28.729 SYMLINK libspdk_scheduler_dpdk_governor.so 00:11:28.729 SO libspdk_blob_bdev.so.11.0 00:11:28.729 SO libspdk_accel_ioat.so.6.0 00:11:28.729 SO libspdk_keyring_file.so.1.0 00:11:28.729 SYMLINK libspdk_scheduler_dynamic.so 00:11:28.729 LIB libspdk_accel_error.a 00:11:28.729 SYMLINK libspdk_blob_bdev.so 00:11:28.729 CC module/scheduler/gscheduler/gscheduler.o 00:11:28.729 SYMLINK libspdk_accel_ioat.so 00:11:28.729 SYMLINK libspdk_keyring_file.so 00:11:28.729 SO libspdk_accel_error.so.2.0 00:11:28.729 LIB libspdk_accel_dsa.a 00:11:28.729 SO libspdk_accel_dsa.so.5.0 00:11:28.729 SYMLINK libspdk_accel_error.so 00:11:28.988 CC module/accel/iaa/accel_iaa.o 00:11:28.988 SYMLINK libspdk_accel_dsa.so 00:11:28.988 LIB libspdk_scheduler_gscheduler.a 00:11:28.988 SO libspdk_scheduler_gscheduler.so.4.0 00:11:28.988 CC module/bdev/delay/vbdev_delay.o 00:11:28.988 CC module/bdev/gpt/gpt.o 00:11:28.988 CC module/bdev/error/vbdev_error.o 00:11:28.988 CC module/bdev/lvol/vbdev_lvol.o 00:11:28.988 CC module/blobfs/bdev/blobfs_bdev.o 00:11:28.988 SYMLINK libspdk_scheduler_gscheduler.so 00:11:28.988 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:11:29.247 CC module/bdev/malloc/bdev_malloc.o 00:11:29.247 CC module/accel/iaa/accel_iaa_rpc.o 00:11:29.247 CC module/bdev/malloc/bdev_malloc_rpc.o 00:11:29.247 LIB libspdk_blobfs_bdev.a 00:11:29.247 CC module/bdev/gpt/vbdev_gpt.o 00:11:29.247 SO libspdk_blobfs_bdev.so.6.0 00:11:29.247 LIB libspdk_sock_uring.a 00:11:29.247 LIB libspdk_accel_iaa.a 00:11:29.247 LIB libspdk_sock_posix.a 00:11:29.247 SO libspdk_sock_uring.so.5.0 00:11:29.247 SO libspdk_accel_iaa.so.3.0 00:11:29.506 CC module/bdev/error/vbdev_error_rpc.o 00:11:29.506 SO libspdk_sock_posix.so.6.0 00:11:29.506 SYMLINK libspdk_blobfs_bdev.so 00:11:29.506 CC module/bdev/delay/vbdev_delay_rpc.o 00:11:29.506 SYMLINK libspdk_sock_uring.so 00:11:29.506 SYMLINK libspdk_accel_iaa.so 00:11:29.506 CC module/bdev/lvol/vbdev_lvol_rpc.o 00:11:29.506 SYMLINK libspdk_sock_posix.so 00:11:29.506 LIB libspdk_bdev_error.a 00:11:29.506 SO libspdk_bdev_error.so.6.0 00:11:29.506 LIB libspdk_bdev_gpt.a 00:11:29.506 LIB libspdk_bdev_delay.a 00:11:29.506 CC module/bdev/passthru/vbdev_passthru.o 00:11:29.506 CC module/bdev/null/bdev_null.o 00:11:29.765 CC module/bdev/nvme/bdev_nvme.o 00:11:29.765 LIB libspdk_bdev_malloc.a 00:11:29.765 SO libspdk_bdev_gpt.so.6.0 00:11:29.765 CC module/bdev/null/bdev_null_rpc.o 00:11:29.765 SO libspdk_bdev_delay.so.6.0 00:11:29.765 CC module/bdev/raid/bdev_raid.o 00:11:29.765 SYMLINK libspdk_bdev_error.so 00:11:29.765 CC module/bdev/raid/bdev_raid_rpc.o 00:11:29.765 SO libspdk_bdev_malloc.so.6.0 00:11:29.765 SYMLINK libspdk_bdev_gpt.so 00:11:29.765 SYMLINK 
libspdk_bdev_delay.so 00:11:29.765 CC module/bdev/raid/bdev_raid_sb.o 00:11:29.765 CC module/bdev/raid/raid0.o 00:11:29.765 SYMLINK libspdk_bdev_malloc.so 00:11:29.765 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:11:29.765 LIB libspdk_bdev_lvol.a 00:11:29.765 SO libspdk_bdev_lvol.so.6.0 00:11:30.023 LIB libspdk_bdev_null.a 00:11:30.023 CC module/bdev/nvme/bdev_nvme_rpc.o 00:11:30.023 SYMLINK libspdk_bdev_lvol.so 00:11:30.023 SO libspdk_bdev_null.so.6.0 00:11:30.023 CC module/bdev/raid/raid1.o 00:11:30.023 LIB libspdk_bdev_passthru.a 00:11:30.023 CC module/bdev/raid/concat.o 00:11:30.023 CC module/bdev/split/vbdev_split.o 00:11:30.023 SO libspdk_bdev_passthru.so.6.0 00:11:30.023 CC module/bdev/split/vbdev_split_rpc.o 00:11:30.023 SYMLINK libspdk_bdev_null.so 00:11:30.281 SYMLINK libspdk_bdev_passthru.so 00:11:30.281 CC module/bdev/zone_block/vbdev_zone_block.o 00:11:30.281 CC module/bdev/uring/bdev_uring.o 00:11:30.281 CC module/bdev/nvme/nvme_rpc.o 00:11:30.281 CC module/bdev/nvme/bdev_mdns_client.o 00:11:30.281 CC module/bdev/nvme/vbdev_opal.o 00:11:30.281 LIB libspdk_bdev_split.a 00:11:30.281 CC module/bdev/aio/bdev_aio.o 00:11:30.281 SO libspdk_bdev_split.so.6.0 00:11:30.540 SYMLINK libspdk_bdev_split.so 00:11:30.540 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 00:11:30.540 CC module/bdev/nvme/vbdev_opal_rpc.o 00:11:30.540 LIB libspdk_bdev_zone_block.a 00:11:30.799 CC module/bdev/ftl/bdev_ftl.o 00:11:30.799 SO libspdk_bdev_zone_block.so.6.0 00:11:30.799 CC module/bdev/uring/bdev_uring_rpc.o 00:11:30.799 CC module/bdev/virtio/bdev_virtio_scsi.o 00:11:30.799 CC module/bdev/iscsi/bdev_iscsi.o 00:11:30.799 CC module/bdev/iscsi/bdev_iscsi_rpc.o 00:11:30.799 SYMLINK libspdk_bdev_zone_block.so 00:11:30.799 CC module/bdev/aio/bdev_aio_rpc.o 00:11:30.799 CC module/bdev/ftl/bdev_ftl_rpc.o 00:11:30.799 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o 00:11:30.799 LIB libspdk_bdev_uring.a 00:11:31.056 LIB libspdk_bdev_raid.a 00:11:31.056 CC module/bdev/virtio/bdev_virtio_blk.o 00:11:31.056 SO libspdk_bdev_uring.so.6.0 00:11:31.056 LIB libspdk_bdev_aio.a 00:11:31.056 SO libspdk_bdev_aio.so.6.0 00:11:31.056 SO libspdk_bdev_raid.so.6.0 00:11:31.056 SYMLINK libspdk_bdev_uring.so 00:11:31.056 CC module/bdev/virtio/bdev_virtio_rpc.o 00:11:31.056 LIB libspdk_bdev_ftl.a 00:11:31.056 SYMLINK libspdk_bdev_aio.so 00:11:31.056 SO libspdk_bdev_ftl.so.6.0 00:11:31.056 SYMLINK libspdk_bdev_raid.so 00:11:31.056 SYMLINK libspdk_bdev_ftl.so 00:11:31.314 LIB libspdk_bdev_iscsi.a 00:11:31.314 SO libspdk_bdev_iscsi.so.6.0 00:11:31.314 SYMLINK libspdk_bdev_iscsi.so 00:11:31.314 LIB libspdk_bdev_virtio.a 00:11:31.628 SO libspdk_bdev_virtio.so.6.0 00:11:31.628 SYMLINK libspdk_bdev_virtio.so 00:11:32.566 LIB libspdk_bdev_nvme.a 00:11:32.566 SO libspdk_bdev_nvme.so.7.0 00:11:32.825 SYMLINK libspdk_bdev_nvme.so 00:11:33.394 CC module/event/subsystems/vhost_blk/vhost_blk.o 00:11:33.394 CC module/event/subsystems/keyring/keyring.o 00:11:33.394 CC module/event/subsystems/sock/sock.o 00:11:33.394 CC module/event/subsystems/vmd/vmd.o 00:11:33.394 CC module/event/subsystems/scheduler/scheduler.o 00:11:33.394 CC module/event/subsystems/vmd/vmd_rpc.o 00:11:33.394 CC module/event/subsystems/iobuf/iobuf.o 00:11:33.394 CC module/event/subsystems/iobuf/iobuf_rpc.o 00:11:33.394 LIB libspdk_event_keyring.a 00:11:33.394 LIB libspdk_event_sock.a 00:11:33.394 LIB libspdk_event_vhost_blk.a 00:11:33.394 LIB libspdk_event_scheduler.a 00:11:33.394 LIB libspdk_event_vmd.a 00:11:33.394 SO libspdk_event_keyring.so.1.0 00:11:33.394 LIB 
libspdk_event_iobuf.a 00:11:33.394 SO libspdk_event_sock.so.5.0 00:11:33.394 SO libspdk_event_vhost_blk.so.3.0 00:11:33.394 SO libspdk_event_scheduler.so.4.0 00:11:33.394 SO libspdk_event_vmd.so.6.0 00:11:33.394 SO libspdk_event_iobuf.so.3.0 00:11:33.653 SYMLINK libspdk_event_sock.so 00:11:33.653 SYMLINK libspdk_event_vhost_blk.so 00:11:33.653 SYMLINK libspdk_event_keyring.so 00:11:33.653 SYMLINK libspdk_event_scheduler.so 00:11:33.653 SYMLINK libspdk_event_vmd.so 00:11:33.653 SYMLINK libspdk_event_iobuf.so 00:11:33.913 CC module/event/subsystems/accel/accel.o 00:11:33.913 LIB libspdk_event_accel.a 00:11:34.172 SO libspdk_event_accel.so.6.0 00:11:34.172 SYMLINK libspdk_event_accel.so 00:11:34.431 CC module/event/subsystems/bdev/bdev.o 00:11:34.690 LIB libspdk_event_bdev.a 00:11:34.690 SO libspdk_event_bdev.so.6.0 00:11:34.690 SYMLINK libspdk_event_bdev.so 00:11:34.949 CC module/event/subsystems/scsi/scsi.o 00:11:34.949 CC module/event/subsystems/nbd/nbd.o 00:11:34.949 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:11:34.949 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:11:34.949 CC module/event/subsystems/ublk/ublk.o 00:11:35.208 LIB libspdk_event_nbd.a 00:11:35.208 LIB libspdk_event_ublk.a 00:11:35.208 LIB libspdk_event_scsi.a 00:11:35.208 SO libspdk_event_nbd.so.6.0 00:11:35.208 SO libspdk_event_ublk.so.3.0 00:11:35.208 SO libspdk_event_scsi.so.6.0 00:11:35.208 SYMLINK libspdk_event_ublk.so 00:11:35.208 SYMLINK libspdk_event_scsi.so 00:11:35.208 LIB libspdk_event_nvmf.a 00:11:35.467 SYMLINK libspdk_event_nbd.so 00:11:35.467 SO libspdk_event_nvmf.so.6.0 00:11:35.467 SYMLINK libspdk_event_nvmf.so 00:11:35.467 CC module/event/subsystems/iscsi/iscsi.o 00:11:35.467 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 00:11:35.726 LIB libspdk_event_vhost_scsi.a 00:11:35.726 LIB libspdk_event_iscsi.a 00:11:35.726 SO libspdk_event_vhost_scsi.so.3.0 00:11:35.726 SO libspdk_event_iscsi.so.6.0 00:11:36.016 SYMLINK libspdk_event_vhost_scsi.so 00:11:36.016 SYMLINK libspdk_event_iscsi.so 00:11:36.016 SO libspdk.so.6.0 00:11:36.016 SYMLINK libspdk.so 00:11:36.275 CXX app/trace/trace.o 00:11:36.275 CC examples/vmd/lsvmd/lsvmd.o 00:11:36.275 CC examples/ioat/perf/perf.o 00:11:36.534 CC examples/sock/hello_world/hello_sock.o 00:11:36.534 CC examples/accel/perf/accel_perf.o 00:11:36.534 CC examples/nvme/hello_world/hello_world.o 00:11:36.534 CC examples/nvmf/nvmf/nvmf.o 00:11:36.534 CC examples/blob/hello_world/hello_blob.o 00:11:36.534 CC examples/bdev/hello_world/hello_bdev.o 00:11:36.534 CC test/accel/dif/dif.o 00:11:36.534 LINK lsvmd 00:11:36.534 LINK ioat_perf 00:11:36.792 LINK hello_sock 00:11:36.792 LINK hello_world 00:11:36.792 LINK hello_bdev 00:11:36.792 LINK hello_blob 00:11:36.792 LINK spdk_trace 00:11:36.792 LINK nvmf 00:11:36.792 CC examples/vmd/led/led.o 00:11:36.792 CC examples/ioat/verify/verify.o 00:11:37.050 LINK dif 00:11:37.050 CC examples/nvme/reconnect/reconnect.o 00:11:37.050 LINK led 00:11:37.050 CC examples/util/zipf/zipf.o 00:11:37.050 CC examples/bdev/bdevperf/bdevperf.o 00:11:37.050 LINK accel_perf 00:11:37.050 CC app/trace_record/trace_record.o 00:11:37.050 CC examples/blob/cli/blobcli.o 00:11:37.050 LINK verify 00:11:37.308 LINK zipf 00:11:37.308 CC examples/thread/thread/thread_ex.o 00:11:37.308 CC examples/idxd/perf/perf.o 00:11:37.308 CC test/app/histogram_perf/histogram_perf.o 00:11:37.308 LINK spdk_trace_record 00:11:37.308 CC test/app/bdev_svc/bdev_svc.o 00:11:37.567 LINK reconnect 00:11:37.567 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:11:37.567 CC 
test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:11:37.567 LINK thread 00:11:37.567 LINK histogram_perf 00:11:37.567 LINK bdev_svc 00:11:37.826 CC app/nvmf_tgt/nvmf_main.o 00:11:37.826 CC examples/nvme/nvme_manage/nvme_manage.o 00:11:37.826 LINK blobcli 00:11:37.826 LINK idxd_perf 00:11:37.826 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 00:11:37.826 CC test/app/jsoncat/jsoncat.o 00:11:37.826 LINK nvmf_tgt 00:11:38.085 CC test/app/stub/stub.o 00:11:38.085 LINK nvme_fuzz 00:11:38.085 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:11:38.085 LINK jsoncat 00:11:38.085 CC examples/nvme/arbitration/arbitration.o 00:11:38.085 CC examples/nvme/hotplug/hotplug.o 00:11:38.085 LINK bdevperf 00:11:38.085 LINK stub 00:11:38.344 CC app/iscsi_tgt/iscsi_tgt.o 00:11:38.344 CC examples/nvme/cmb_copy/cmb_copy.o 00:11:38.344 LINK hotplug 00:11:38.344 CC app/spdk_tgt/spdk_tgt.o 00:11:38.344 CC app/spdk_lspci/spdk_lspci.o 00:11:38.344 LINK arbitration 00:11:38.344 LINK nvme_manage 00:11:38.344 CC app/spdk_nvme_perf/perf.o 00:11:38.344 LINK iscsi_tgt 00:11:38.603 LINK vhost_fuzz 00:11:38.603 LINK cmb_copy 00:11:38.603 LINK spdk_lspci 00:11:38.603 CC examples/nvme/abort/abort.o 00:11:38.603 LINK spdk_tgt 00:11:38.603 CC app/spdk_nvme_discover/discovery_aer.o 00:11:38.862 CC app/spdk_nvme_identify/identify.o 00:11:38.862 CC app/spdk_top/spdk_top.o 00:11:38.862 CC examples/interrupt_tgt/interrupt_tgt.o 00:11:38.862 CC test/bdev/bdevio/bdevio.o 00:11:38.862 CC app/vhost/vhost.o 00:11:38.862 CC app/spdk_dd/spdk_dd.o 00:11:38.862 LINK spdk_nvme_discover 00:11:39.122 LINK interrupt_tgt 00:11:39.122 LINK abort 00:11:39.122 LINK vhost 00:11:39.382 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:11:39.382 CC app/fio/nvme/fio_plugin.o 00:11:39.382 LINK bdevio 00:11:39.382 LINK spdk_dd 00:11:39.382 CC app/fio/bdev/fio_plugin.o 00:11:39.382 CC test/blobfs/mkfs/mkfs.o 00:11:39.382 LINK pmr_persistence 00:11:39.641 LINK spdk_nvme_perf 00:11:39.641 LINK iscsi_fuzz 00:11:39.641 TEST_HEADER include/spdk/accel.h 00:11:39.641 TEST_HEADER include/spdk/accel_module.h 00:11:39.641 TEST_HEADER include/spdk/assert.h 00:11:39.641 TEST_HEADER include/spdk/barrier.h 00:11:39.641 TEST_HEADER include/spdk/base64.h 00:11:39.641 TEST_HEADER include/spdk/bdev.h 00:11:39.641 TEST_HEADER include/spdk/bdev_module.h 00:11:39.641 TEST_HEADER include/spdk/bdev_zone.h 00:11:39.641 TEST_HEADER include/spdk/bit_array.h 00:11:39.641 TEST_HEADER include/spdk/bit_pool.h 00:11:39.641 TEST_HEADER include/spdk/blob_bdev.h 00:11:39.641 TEST_HEADER include/spdk/blobfs_bdev.h 00:11:39.641 TEST_HEADER include/spdk/blobfs.h 00:11:39.641 TEST_HEADER include/spdk/blob.h 00:11:39.641 TEST_HEADER include/spdk/conf.h 00:11:39.641 TEST_HEADER include/spdk/config.h 00:11:39.641 LINK mkfs 00:11:39.641 TEST_HEADER include/spdk/cpuset.h 00:11:39.641 TEST_HEADER include/spdk/crc16.h 00:11:39.641 TEST_HEADER include/spdk/crc32.h 00:11:39.641 TEST_HEADER include/spdk/crc64.h 00:11:39.642 TEST_HEADER include/spdk/dif.h 00:11:39.642 TEST_HEADER include/spdk/dma.h 00:11:39.642 TEST_HEADER include/spdk/endian.h 00:11:39.642 TEST_HEADER include/spdk/env_dpdk.h 00:11:39.642 TEST_HEADER include/spdk/env.h 00:11:39.642 TEST_HEADER include/spdk/event.h 00:11:39.642 TEST_HEADER include/spdk/fd_group.h 00:11:39.642 TEST_HEADER include/spdk/fd.h 00:11:39.642 TEST_HEADER include/spdk/file.h 00:11:39.642 TEST_HEADER include/spdk/ftl.h 00:11:39.642 TEST_HEADER include/spdk/gpt_spec.h 00:11:39.642 TEST_HEADER include/spdk/hexlify.h 00:11:39.642 LINK spdk_nvme_identify 00:11:39.642 TEST_HEADER 
include/spdk/histogram_data.h 00:11:39.642 TEST_HEADER include/spdk/idxd.h 00:11:39.642 TEST_HEADER include/spdk/idxd_spec.h 00:11:39.642 TEST_HEADER include/spdk/init.h 00:11:39.642 TEST_HEADER include/spdk/ioat.h 00:11:39.642 TEST_HEADER include/spdk/ioat_spec.h 00:11:39.642 TEST_HEADER include/spdk/iscsi_spec.h 00:11:39.642 TEST_HEADER include/spdk/json.h 00:11:39.642 TEST_HEADER include/spdk/jsonrpc.h 00:11:39.642 TEST_HEADER include/spdk/keyring.h 00:11:39.642 TEST_HEADER include/spdk/keyring_module.h 00:11:39.642 TEST_HEADER include/spdk/likely.h 00:11:39.642 TEST_HEADER include/spdk/log.h 00:11:39.642 TEST_HEADER include/spdk/lvol.h 00:11:39.642 TEST_HEADER include/spdk/memory.h 00:11:39.642 TEST_HEADER include/spdk/mmio.h 00:11:39.642 TEST_HEADER include/spdk/nbd.h 00:11:39.900 TEST_HEADER include/spdk/notify.h 00:11:39.900 TEST_HEADER include/spdk/nvme.h 00:11:39.900 TEST_HEADER include/spdk/nvme_intel.h 00:11:39.900 TEST_HEADER include/spdk/nvme_ocssd.h 00:11:39.900 TEST_HEADER include/spdk/nvme_ocssd_spec.h 00:11:39.900 TEST_HEADER include/spdk/nvme_spec.h 00:11:39.900 TEST_HEADER include/spdk/nvme_zns.h 00:11:39.900 TEST_HEADER include/spdk/nvmf_cmd.h 00:11:39.900 TEST_HEADER include/spdk/nvmf_fc_spec.h 00:11:39.900 TEST_HEADER include/spdk/nvmf.h 00:11:39.900 TEST_HEADER include/spdk/nvmf_spec.h 00:11:39.900 TEST_HEADER include/spdk/nvmf_transport.h 00:11:39.900 TEST_HEADER include/spdk/opal.h 00:11:39.900 LINK spdk_top 00:11:39.900 TEST_HEADER include/spdk/opal_spec.h 00:11:39.900 TEST_HEADER include/spdk/pci_ids.h 00:11:39.900 TEST_HEADER include/spdk/pipe.h 00:11:39.900 TEST_HEADER include/spdk/queue.h 00:11:39.900 TEST_HEADER include/spdk/reduce.h 00:11:39.900 TEST_HEADER include/spdk/rpc.h 00:11:39.900 TEST_HEADER include/spdk/scheduler.h 00:11:39.900 TEST_HEADER include/spdk/scsi.h 00:11:39.900 TEST_HEADER include/spdk/scsi_spec.h 00:11:39.900 TEST_HEADER include/spdk/sock.h 00:11:39.900 TEST_HEADER include/spdk/stdinc.h 00:11:39.900 TEST_HEADER include/spdk/string.h 00:11:39.900 TEST_HEADER include/spdk/thread.h 00:11:39.900 TEST_HEADER include/spdk/trace.h 00:11:39.900 TEST_HEADER include/spdk/trace_parser.h 00:11:39.900 TEST_HEADER include/spdk/tree.h 00:11:39.900 TEST_HEADER include/spdk/ublk.h 00:11:39.900 TEST_HEADER include/spdk/util.h 00:11:39.900 TEST_HEADER include/spdk/uuid.h 00:11:39.900 CC test/dma/test_dma/test_dma.o 00:11:39.900 TEST_HEADER include/spdk/version.h 00:11:39.900 TEST_HEADER include/spdk/vfio_user_pci.h 00:11:39.900 TEST_HEADER include/spdk/vfio_user_spec.h 00:11:39.900 TEST_HEADER include/spdk/vhost.h 00:11:39.900 TEST_HEADER include/spdk/vmd.h 00:11:39.900 TEST_HEADER include/spdk/xor.h 00:11:39.900 TEST_HEADER include/spdk/zipf.h 00:11:39.900 CXX test/cpp_headers/accel.o 00:11:39.900 CC test/env/mem_callbacks/mem_callbacks.o 00:11:39.900 CC test/event/event_perf/event_perf.o 00:11:39.900 CC test/event/reactor/reactor.o 00:11:39.900 LINK spdk_nvme 00:11:40.159 CC test/event/reactor_perf/reactor_perf.o 00:11:40.159 CXX test/cpp_headers/accel_module.o 00:11:40.159 LINK event_perf 00:11:40.159 LINK reactor 00:11:40.159 CC test/nvme/aer/aer.o 00:11:40.159 LINK spdk_bdev 00:11:40.159 CC test/lvol/esnap/esnap.o 00:11:40.159 LINK reactor_perf 00:11:40.159 CC test/nvme/reset/reset.o 00:11:40.159 CXX test/cpp_headers/assert.o 00:11:40.417 LINK test_dma 00:11:40.417 CC test/nvme/overhead/overhead.o 00:11:40.417 CC test/nvme/e2edp/nvme_dp.o 00:11:40.417 CC test/nvme/sgl/sgl.o 00:11:40.417 LINK aer 00:11:40.417 CXX test/cpp_headers/barrier.o 
00:11:40.417 CC test/event/app_repeat/app_repeat.o 00:11:40.675 LINK mem_callbacks 00:11:40.675 LINK reset 00:11:40.675 CC test/rpc_client/rpc_client_test.o 00:11:40.675 LINK app_repeat 00:11:40.675 CXX test/cpp_headers/base64.o 00:11:40.675 LINK sgl 00:11:40.675 LINK overhead 00:11:40.675 LINK nvme_dp 00:11:40.675 CC test/env/vtophys/vtophys.o 00:11:40.675 CC test/thread/poller_perf/poller_perf.o 00:11:40.934 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:11:40.934 LINK rpc_client_test 00:11:40.934 CXX test/cpp_headers/bdev.o 00:11:40.934 LINK vtophys 00:11:40.934 CC test/nvme/err_injection/err_injection.o 00:11:40.934 LINK poller_perf 00:11:40.934 CC test/env/memory/memory_ut.o 00:11:40.934 CC test/nvme/startup/startup.o 00:11:40.934 CC test/event/scheduler/scheduler.o 00:11:40.934 LINK env_dpdk_post_init 00:11:41.222 CXX test/cpp_headers/bdev_module.o 00:11:41.222 CXX test/cpp_headers/bdev_zone.o 00:11:41.222 CC test/nvme/reserve/reserve.o 00:11:41.222 LINK startup 00:11:41.222 LINK err_injection 00:11:41.222 CC test/env/pci/pci_ut.o 00:11:41.222 LINK scheduler 00:11:41.222 CC test/nvme/simple_copy/simple_copy.o 00:11:41.222 CXX test/cpp_headers/bit_array.o 00:11:41.222 CXX test/cpp_headers/bit_pool.o 00:11:41.480 LINK reserve 00:11:41.480 CXX test/cpp_headers/blob_bdev.o 00:11:41.480 CC test/nvme/connect_stress/connect_stress.o 00:11:41.480 CXX test/cpp_headers/blobfs_bdev.o 00:11:41.480 CXX test/cpp_headers/blobfs.o 00:11:41.480 LINK simple_copy 00:11:41.480 CC test/nvme/boot_partition/boot_partition.o 00:11:41.480 CXX test/cpp_headers/blob.o 00:11:41.739 CC test/nvme/compliance/nvme_compliance.o 00:11:41.739 LINK connect_stress 00:11:41.739 LINK pci_ut 00:11:41.739 CXX test/cpp_headers/conf.o 00:11:41.739 CXX test/cpp_headers/config.o 00:11:41.739 LINK boot_partition 00:11:41.739 CXX test/cpp_headers/cpuset.o 00:11:41.739 CC test/nvme/doorbell_aers/doorbell_aers.o 00:11:41.739 CC test/nvme/fused_ordering/fused_ordering.o 00:11:41.999 CXX test/cpp_headers/crc16.o 00:11:41.999 CC test/nvme/fdp/fdp.o 00:11:41.999 CXX test/cpp_headers/crc32.o 00:11:41.999 CC test/nvme/cuse/cuse.o 00:11:41.999 LINK memory_ut 00:11:41.999 CXX test/cpp_headers/crc64.o 00:11:41.999 LINK doorbell_aers 00:11:41.999 CXX test/cpp_headers/dif.o 00:11:41.999 LINK nvme_compliance 00:11:41.999 CXX test/cpp_headers/dma.o 00:11:41.999 LINK fused_ordering 00:11:42.257 CXX test/cpp_headers/endian.o 00:11:42.257 CXX test/cpp_headers/env_dpdk.o 00:11:42.257 CXX test/cpp_headers/env.o 00:11:42.257 CXX test/cpp_headers/event.o 00:11:42.257 CXX test/cpp_headers/fd_group.o 00:11:42.257 CXX test/cpp_headers/fd.o 00:11:42.257 LINK fdp 00:11:42.257 CXX test/cpp_headers/file.o 00:11:42.257 CXX test/cpp_headers/ftl.o 00:11:42.257 CXX test/cpp_headers/gpt_spec.o 00:11:42.515 CXX test/cpp_headers/hexlify.o 00:11:42.515 CXX test/cpp_headers/histogram_data.o 00:11:42.515 CXX test/cpp_headers/idxd.o 00:11:42.515 CXX test/cpp_headers/idxd_spec.o 00:11:42.515 CXX test/cpp_headers/init.o 00:11:42.515 CXX test/cpp_headers/ioat.o 00:11:42.515 CXX test/cpp_headers/ioat_spec.o 00:11:42.515 CXX test/cpp_headers/iscsi_spec.o 00:11:42.515 CXX test/cpp_headers/json.o 00:11:42.515 CXX test/cpp_headers/jsonrpc.o 00:11:42.515 CXX test/cpp_headers/keyring.o 00:11:42.774 CXX test/cpp_headers/keyring_module.o 00:11:42.774 CXX test/cpp_headers/likely.o 00:11:42.774 CXX test/cpp_headers/log.o 00:11:42.774 CXX test/cpp_headers/lvol.o 00:11:42.774 CXX test/cpp_headers/memory.o 00:11:42.774 CXX test/cpp_headers/mmio.o 00:11:42.774 CXX 
test/cpp_headers/nbd.o 00:11:42.774 CXX test/cpp_headers/notify.o 00:11:42.774 CXX test/cpp_headers/nvme.o 00:11:42.774 CXX test/cpp_headers/nvme_intel.o 00:11:42.774 CXX test/cpp_headers/nvme_ocssd.o 00:11:42.774 CXX test/cpp_headers/nvme_ocssd_spec.o 00:11:43.033 CXX test/cpp_headers/nvme_spec.o 00:11:43.033 CXX test/cpp_headers/nvme_zns.o 00:11:43.033 CXX test/cpp_headers/nvmf_cmd.o 00:11:43.033 CXX test/cpp_headers/nvmf_fc_spec.o 00:11:43.033 CXX test/cpp_headers/nvmf.o 00:11:43.033 CXX test/cpp_headers/nvmf_spec.o 00:11:43.033 CXX test/cpp_headers/nvmf_transport.o 00:11:43.033 CXX test/cpp_headers/opal.o 00:11:43.033 CXX test/cpp_headers/opal_spec.o 00:11:43.293 CXX test/cpp_headers/pci_ids.o 00:11:43.293 CXX test/cpp_headers/pipe.o 00:11:43.293 CXX test/cpp_headers/queue.o 00:11:43.293 LINK cuse 00:11:43.293 CXX test/cpp_headers/reduce.o 00:11:43.293 CXX test/cpp_headers/rpc.o 00:11:43.293 CXX test/cpp_headers/scheduler.o 00:11:43.293 CXX test/cpp_headers/scsi.o 00:11:43.293 CXX test/cpp_headers/scsi_spec.o 00:11:43.293 CXX test/cpp_headers/sock.o 00:11:43.293 CXX test/cpp_headers/stdinc.o 00:11:43.293 CXX test/cpp_headers/string.o 00:11:43.293 CXX test/cpp_headers/thread.o 00:11:43.552 CXX test/cpp_headers/trace.o 00:11:43.552 CXX test/cpp_headers/trace_parser.o 00:11:43.552 CXX test/cpp_headers/tree.o 00:11:43.552 CXX test/cpp_headers/ublk.o 00:11:43.552 CXX test/cpp_headers/util.o 00:11:43.552 CXX test/cpp_headers/uuid.o 00:11:43.552 CXX test/cpp_headers/version.o 00:11:43.552 CXX test/cpp_headers/vfio_user_pci.o 00:11:43.552 CXX test/cpp_headers/vfio_user_spec.o 00:11:43.552 CXX test/cpp_headers/vhost.o 00:11:43.552 CXX test/cpp_headers/vmd.o 00:11:43.552 CXX test/cpp_headers/xor.o 00:11:43.552 CXX test/cpp_headers/zipf.o 00:11:46.840 LINK esnap 00:11:46.840 00:11:46.840 real 1m19.811s 00:11:46.840 user 7m32.443s 00:11:46.840 sys 1m46.884s 00:11:46.840 09:40:10 -- common/autotest_common.sh@1112 -- $ xtrace_disable 00:11:46.840 09:40:10 -- common/autotest_common.sh@10 -- $ set +x 00:11:46.840 ************************************ 00:11:46.840 END TEST make 00:11:46.840 ************************************ 00:11:46.840 09:40:10 -- spdk/autobuild.sh@1 -- $ stop_monitor_resources 00:11:46.840 09:40:10 -- pm/common@30 -- $ signal_monitor_resources TERM 00:11:46.840 09:40:10 -- pm/common@41 -- $ local monitor pid pids signal=TERM 00:11:46.840 09:40:10 -- pm/common@43 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:11:46.840 09:40:10 -- pm/common@44 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-cpu-load.pid ]] 00:11:46.840 09:40:10 -- pm/common@45 -- $ pid=5134 00:11:46.840 09:40:10 -- pm/common@52 -- $ sudo kill -TERM 5134 00:11:47.099 09:40:10 -- pm/common@43 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:11:47.099 09:40:10 -- pm/common@44 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-vmstat.pid ]] 00:11:47.099 09:40:10 -- pm/common@45 -- $ pid=5135 00:11:47.099 09:40:10 -- pm/common@52 -- $ sudo kill -TERM 5135 00:11:47.099 09:40:10 -- spdk/autotest.sh@25 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:11:47.099 09:40:10 -- nvmf/common.sh@7 -- # uname -s 00:11:47.099 09:40:10 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:47.099 09:40:10 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:47.099 09:40:10 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:47.099 09:40:10 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:47.099 09:40:10 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:47.099 09:40:10 -- 
nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:47.099 09:40:10 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:47.099 09:40:10 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:47.099 09:40:10 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:47.099 09:40:10 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:47.099 09:40:10 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:e0c1fd61-2902-4c83-8893-eeb512aeb67d 00:11:47.099 09:40:10 -- nvmf/common.sh@18 -- # NVME_HOSTID=e0c1fd61-2902-4c83-8893-eeb512aeb67d 00:11:47.099 09:40:10 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:47.099 09:40:10 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:47.099 09:40:10 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:11:47.099 09:40:10 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:47.099 09:40:10 -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:11:47.099 09:40:10 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:47.099 09:40:10 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:47.099 09:40:10 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:47.099 09:40:10 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:47.099 09:40:10 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:47.099 09:40:10 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:47.099 09:40:10 -- paths/export.sh@5 -- # export PATH 00:11:47.099 09:40:10 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:47.099 09:40:10 -- nvmf/common.sh@47 -- # : 0 00:11:47.099 09:40:10 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:11:47.099 09:40:10 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:11:47.099 09:40:10 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:47.099 09:40:10 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:47.099 09:40:10 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:47.099 09:40:10 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:11:47.099 09:40:10 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:11:47.099 09:40:10 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:11:47.099 09:40:10 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']' 00:11:47.099 09:40:10 -- spdk/autotest.sh@32 -- # uname -s 00:11:47.099 09:40:10 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']' 00:11:47.099 09:40:10 -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h' 00:11:47.099 09:40:10 
-- spdk/autotest.sh@34 -- # mkdir -p /home/vagrant/spdk_repo/spdk/../output/coredumps 00:11:47.099 09:40:10 -- spdk/autotest.sh@39 -- # echo '|/home/vagrant/spdk_repo/spdk/scripts/core-collector.sh %P %s %t' 00:11:47.099 09:40:10 -- spdk/autotest.sh@40 -- # echo /home/vagrant/spdk_repo/spdk/../output/coredumps 00:11:47.099 09:40:10 -- spdk/autotest.sh@44 -- # modprobe nbd 00:11:47.099 09:40:10 -- spdk/autotest.sh@46 -- # type -P udevadm 00:11:47.099 09:40:10 -- spdk/autotest.sh@46 -- # udevadm=/usr/sbin/udevadm 00:11:47.099 09:40:10 -- spdk/autotest.sh@48 -- # udevadm_pid=52335 00:11:47.099 09:40:10 -- spdk/autotest.sh@47 -- # /usr/sbin/udevadm monitor --property 00:11:47.099 09:40:10 -- spdk/autotest.sh@53 -- # start_monitor_resources 00:11:47.099 09:40:10 -- pm/common@17 -- # local monitor 00:11:47.099 09:40:10 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:11:47.099 09:40:10 -- pm/common@23 -- # MONITOR_RESOURCES_PIDS["$monitor"]=52337 00:11:47.099 09:40:10 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:11:47.099 09:40:10 -- pm/common@23 -- # MONITOR_RESOURCES_PIDS["$monitor"]=52341 00:11:47.099 09:40:10 -- pm/common@26 -- # sleep 1 00:11:47.099 09:40:10 -- pm/common@21 -- # date +%s 00:11:47.099 09:40:10 -- pm/common@21 -- # date +%s 00:11:47.099 09:40:10 -- pm/common@21 -- # sudo -E /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autotest.sh.1713433210 00:11:47.358 09:40:10 -- pm/common@21 -- # sudo -E /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autotest.sh.1713433210 00:11:47.358 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autotest.sh.1713433210_collect-vmstat.pm.log 00:11:47.358 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autotest.sh.1713433210_collect-cpu-load.pm.log 00:11:48.294 09:40:11 -- spdk/autotest.sh@55 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT 00:11:48.294 09:40:11 -- spdk/autotest.sh@57 -- # timing_enter autotest 00:11:48.294 09:40:11 -- common/autotest_common.sh@710 -- # xtrace_disable 00:11:48.294 09:40:11 -- common/autotest_common.sh@10 -- # set +x 00:11:48.294 09:40:11 -- spdk/autotest.sh@59 -- # create_test_list 00:11:48.294 09:40:11 -- common/autotest_common.sh@734 -- # xtrace_disable 00:11:48.294 09:40:11 -- common/autotest_common.sh@10 -- # set +x 00:11:48.294 09:40:11 -- spdk/autotest.sh@61 -- # dirname /home/vagrant/spdk_repo/spdk/autotest.sh 00:11:48.294 09:40:11 -- spdk/autotest.sh@61 -- # readlink -f /home/vagrant/spdk_repo/spdk 00:11:48.294 09:40:11 -- spdk/autotest.sh@61 -- # src=/home/vagrant/spdk_repo/spdk 00:11:48.294 09:40:11 -- spdk/autotest.sh@62 -- # out=/home/vagrant/spdk_repo/spdk/../output 00:11:48.294 09:40:11 -- spdk/autotest.sh@63 -- # cd /home/vagrant/spdk_repo/spdk 00:11:48.294 09:40:11 -- spdk/autotest.sh@65 -- # freebsd_update_contigmem_mod 00:11:48.294 09:40:11 -- common/autotest_common.sh@1441 -- # uname 00:11:48.294 09:40:11 -- common/autotest_common.sh@1441 -- # '[' Linux = FreeBSD ']' 00:11:48.294 09:40:11 -- spdk/autotest.sh@66 -- # freebsd_set_maxsock_buf 00:11:48.294 09:40:11 -- common/autotest_common.sh@1461 -- # uname 00:11:48.294 09:40:11 -- common/autotest_common.sh@1461 -- # [[ Linux = FreeBSD ]] 00:11:48.294 09:40:11 -- spdk/autotest.sh@71 -- # grep CC_TYPE mk/cc.mk 00:11:48.294 09:40:11 -- spdk/autotest.sh@71 -- # CC_TYPE=CC_TYPE=gcc 00:11:48.294 09:40:11 -- 
spdk/autotest.sh@72 -- # hash lcov 00:11:48.294 09:40:11 -- spdk/autotest.sh@72 -- # [[ CC_TYPE=gcc == *\c\l\a\n\g* ]] 00:11:48.294 09:40:11 -- spdk/autotest.sh@80 -- # export 'LCOV_OPTS= 00:11:48.294 --rc lcov_branch_coverage=1 00:11:48.294 --rc lcov_function_coverage=1 00:11:48.294 --rc genhtml_branch_coverage=1 00:11:48.294 --rc genhtml_function_coverage=1 00:11:48.294 --rc genhtml_legend=1 00:11:48.294 --rc geninfo_all_blocks=1 00:11:48.294 ' 00:11:48.294 09:40:11 -- spdk/autotest.sh@80 -- # LCOV_OPTS=' 00:11:48.294 --rc lcov_branch_coverage=1 00:11:48.294 --rc lcov_function_coverage=1 00:11:48.294 --rc genhtml_branch_coverage=1 00:11:48.294 --rc genhtml_function_coverage=1 00:11:48.294 --rc genhtml_legend=1 00:11:48.294 --rc geninfo_all_blocks=1 00:11:48.294 ' 00:11:48.294 09:40:11 -- spdk/autotest.sh@81 -- # export 'LCOV=lcov 00:11:48.294 --rc lcov_branch_coverage=1 00:11:48.294 --rc lcov_function_coverage=1 00:11:48.294 --rc genhtml_branch_coverage=1 00:11:48.294 --rc genhtml_function_coverage=1 00:11:48.294 --rc genhtml_legend=1 00:11:48.294 --rc geninfo_all_blocks=1 00:11:48.294 --no-external' 00:11:48.294 09:40:11 -- spdk/autotest.sh@81 -- # LCOV='lcov 00:11:48.294 --rc lcov_branch_coverage=1 00:11:48.294 --rc lcov_function_coverage=1 00:11:48.294 --rc genhtml_branch_coverage=1 00:11:48.294 --rc genhtml_function_coverage=1 00:11:48.294 --rc genhtml_legend=1 00:11:48.294 --rc geninfo_all_blocks=1 00:11:48.294 --no-external' 00:11:48.294 09:40:11 -- spdk/autotest.sh@83 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -v 00:11:48.294 lcov: LCOV version 1.14 00:11:48.294 09:40:11 -- spdk/autotest.sh@85 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -c -i -t Baseline -d /home/vagrant/spdk_repo/spdk -o /home/vagrant/spdk_repo/spdk/../output/cov_base.info 00:11:56.409 /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_p2l_upgrade.gcno:no functions found 00:11:56.409 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_p2l_upgrade.gcno 00:11:56.409 /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_band_upgrade.gcno:no functions found 00:11:56.409 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_band_upgrade.gcno 00:11:56.409 /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_chunk_upgrade.gcno:no functions found 00:11:56.409 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_chunk_upgrade.gcno 00:12:02.978 /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_stubs.gcno:no functions found 00:12:02.978 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_stubs.gcno 00:12:17.868 /home/vagrant/spdk_repo/spdk/test/cpp_headers/accel.gcno:no functions found 00:12:17.868 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/accel.gcno 00:12:17.868 /home/vagrant/spdk_repo/spdk/test/cpp_headers/accel_module.gcno:no functions found 00:12:17.868 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/accel_module.gcno 00:12:17.868 /home/vagrant/spdk_repo/spdk/test/cpp_headers/assert.gcno:no functions found 00:12:17.868 geninfo: WARNING: GCOV did not 
produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/assert.gcno 00:12:17.868 /home/vagrant/spdk_repo/spdk/test/cpp_headers/barrier.gcno:no functions found 00:12:17.868 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/barrier.gcno 00:12:17.868 /home/vagrant/spdk_repo/spdk/test/cpp_headers/base64.gcno:no functions found 00:12:17.868 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/base64.gcno 00:12:17.868 /home/vagrant/spdk_repo/spdk/test/cpp_headers/bdev.gcno:no functions found 00:12:17.868 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/bdev.gcno 00:12:17.868 /home/vagrant/spdk_repo/spdk/test/cpp_headers/bdev_module.gcno:no functions found 00:12:17.868 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/bdev_module.gcno 00:12:17.868 /home/vagrant/spdk_repo/spdk/test/cpp_headers/bdev_zone.gcno:no functions found 00:12:17.868 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/bdev_zone.gcno 00:12:17.868 /home/vagrant/spdk_repo/spdk/test/cpp_headers/bit_array.gcno:no functions found 00:12:17.868 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/bit_array.gcno 00:12:17.868 /home/vagrant/spdk_repo/spdk/test/cpp_headers/bit_pool.gcno:no functions found 00:12:17.868 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/bit_pool.gcno 00:12:17.868 /home/vagrant/spdk_repo/spdk/test/cpp_headers/blob_bdev.gcno:no functions found 00:12:17.868 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/blob_bdev.gcno 00:12:17.868 /home/vagrant/spdk_repo/spdk/test/cpp_headers/blobfs_bdev.gcno:no functions found 00:12:17.868 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/blobfs_bdev.gcno 00:12:17.868 /home/vagrant/spdk_repo/spdk/test/cpp_headers/blobfs.gcno:no functions found 00:12:17.868 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/blobfs.gcno 00:12:17.868 /home/vagrant/spdk_repo/spdk/test/cpp_headers/blob.gcno:no functions found 00:12:17.868 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/blob.gcno 00:12:17.868 /home/vagrant/spdk_repo/spdk/test/cpp_headers/conf.gcno:no functions found 00:12:17.868 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/conf.gcno 00:12:17.868 /home/vagrant/spdk_repo/spdk/test/cpp_headers/config.gcno:no functions found 00:12:17.868 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/config.gcno 00:12:17.868 /home/vagrant/spdk_repo/spdk/test/cpp_headers/cpuset.gcno:no functions found 00:12:17.868 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/cpuset.gcno 00:12:17.868 /home/vagrant/spdk_repo/spdk/test/cpp_headers/crc16.gcno:no functions found 00:12:17.868 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/crc16.gcno 00:12:17.868 /home/vagrant/spdk_repo/spdk/test/cpp_headers/crc32.gcno:no functions found 00:12:17.868 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/crc32.gcno 00:12:17.868 
/home/vagrant/spdk_repo/spdk/test/cpp_headers/crc64.gcno:no functions found 00:12:17.868 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/crc64.gcno 00:12:17.868 /home/vagrant/spdk_repo/spdk/test/cpp_headers/dif.gcno:no functions found 00:12:17.869 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/dif.gcno 00:12:17.869 /home/vagrant/spdk_repo/spdk/test/cpp_headers/dma.gcno:no functions found 00:12:17.869 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/dma.gcno 00:12:17.869 /home/vagrant/spdk_repo/spdk/test/cpp_headers/endian.gcno:no functions found 00:12:17.869 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/endian.gcno 00:12:17.869 /home/vagrant/spdk_repo/spdk/test/cpp_headers/env_dpdk.gcno:no functions found 00:12:17.869 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/env_dpdk.gcno 00:12:17.869 /home/vagrant/spdk_repo/spdk/test/cpp_headers/env.gcno:no functions found 00:12:17.869 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/env.gcno 00:12:17.869 /home/vagrant/spdk_repo/spdk/test/cpp_headers/event.gcno:no functions found 00:12:17.869 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/event.gcno 00:12:17.869 /home/vagrant/spdk_repo/spdk/test/cpp_headers/fd_group.gcno:no functions found 00:12:17.869 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/fd_group.gcno 00:12:17.869 /home/vagrant/spdk_repo/spdk/test/cpp_headers/fd.gcno:no functions found 00:12:17.869 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/fd.gcno 00:12:17.869 /home/vagrant/spdk_repo/spdk/test/cpp_headers/file.gcno:no functions found 00:12:17.869 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/file.gcno 00:12:17.869 /home/vagrant/spdk_repo/spdk/test/cpp_headers/ftl.gcno:no functions found 00:12:17.869 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/ftl.gcno 00:12:17.869 /home/vagrant/spdk_repo/spdk/test/cpp_headers/gpt_spec.gcno:no functions found 00:12:17.869 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/gpt_spec.gcno 00:12:17.869 /home/vagrant/spdk_repo/spdk/test/cpp_headers/hexlify.gcno:no functions found 00:12:17.869 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/hexlify.gcno 00:12:17.869 /home/vagrant/spdk_repo/spdk/test/cpp_headers/histogram_data.gcno:no functions found 00:12:17.869 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/histogram_data.gcno 00:12:17.869 /home/vagrant/spdk_repo/spdk/test/cpp_headers/idxd.gcno:no functions found 00:12:17.869 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/idxd.gcno 00:12:17.869 /home/vagrant/spdk_repo/spdk/test/cpp_headers/idxd_spec.gcno:no functions found 00:12:17.869 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/idxd_spec.gcno 00:12:17.869 /home/vagrant/spdk_repo/spdk/test/cpp_headers/init.gcno:no functions found 00:12:17.869 geninfo: WARNING: GCOV did not produce any data for 
/home/vagrant/spdk_repo/spdk/test/cpp_headers/init.gcno 00:12:17.869 /home/vagrant/spdk_repo/spdk/test/cpp_headers/ioat.gcno:no functions found 00:12:17.869 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/ioat.gcno 00:12:17.869 /home/vagrant/spdk_repo/spdk/test/cpp_headers/ioat_spec.gcno:no functions found 00:12:17.869 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/ioat_spec.gcno 00:12:17.869 /home/vagrant/spdk_repo/spdk/test/cpp_headers/iscsi_spec.gcno:no functions found 00:12:17.869 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/iscsi_spec.gcno 00:12:17.869 /home/vagrant/spdk_repo/spdk/test/cpp_headers/json.gcno:no functions found 00:12:17.869 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/json.gcno 00:12:17.869 /home/vagrant/spdk_repo/spdk/test/cpp_headers/jsonrpc.gcno:no functions found 00:12:17.869 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/jsonrpc.gcno 00:12:17.869 /home/vagrant/spdk_repo/spdk/test/cpp_headers/keyring.gcno:no functions found 00:12:17.869 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/keyring.gcno 00:12:17.869 /home/vagrant/spdk_repo/spdk/test/cpp_headers/keyring_module.gcno:no functions found 00:12:17.869 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/keyring_module.gcno 00:12:17.869 /home/vagrant/spdk_repo/spdk/test/cpp_headers/log.gcno:no functions found 00:12:17.869 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/log.gcno 00:12:17.869 /home/vagrant/spdk_repo/spdk/test/cpp_headers/likely.gcno:no functions found 00:12:17.869 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/likely.gcno 00:12:17.869 /home/vagrant/spdk_repo/spdk/test/cpp_headers/lvol.gcno:no functions found 00:12:17.869 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/lvol.gcno 00:12:17.869 /home/vagrant/spdk_repo/spdk/test/cpp_headers/memory.gcno:no functions found 00:12:17.869 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/memory.gcno 00:12:17.869 /home/vagrant/spdk_repo/spdk/test/cpp_headers/mmio.gcno:no functions found 00:12:17.869 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/mmio.gcno 00:12:17.869 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nbd.gcno:no functions found 00:12:17.869 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nbd.gcno 00:12:17.869 /home/vagrant/spdk_repo/spdk/test/cpp_headers/notify.gcno:no functions found 00:12:17.869 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/notify.gcno 00:12:17.869 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme.gcno:no functions found 00:12:17.869 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme.gcno 00:12:17.869 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_intel.gcno:no functions found 00:12:17.869 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_intel.gcno 00:12:17.869 
/home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_ocssd_spec.gcno:no functions found 00:12:17.869 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_ocssd_spec.gcno 00:12:17.869 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_ocssd.gcno:no functions found 00:12:17.869 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_ocssd.gcno 00:12:17.869 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_spec.gcno:no functions found 00:12:17.869 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_spec.gcno 00:12:17.869 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_cmd.gcno:no functions found 00:12:17.869 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_cmd.gcno 00:12:17.869 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_zns.gcno:no functions found 00:12:17.869 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_zns.gcno 00:12:17.869 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_fc_spec.gcno:no functions found 00:12:17.869 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_fc_spec.gcno 00:12:17.869 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf.gcno:no functions found 00:12:17.869 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf.gcno 00:12:17.869 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_spec.gcno:no functions found 00:12:17.869 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_spec.gcno 00:12:17.869 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_transport.gcno:no functions found 00:12:17.869 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_transport.gcno 00:12:17.869 /home/vagrant/spdk_repo/spdk/test/cpp_headers/opal.gcno:no functions found 00:12:17.869 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/opal.gcno 00:12:17.869 /home/vagrant/spdk_repo/spdk/test/cpp_headers/opal_spec.gcno:no functions found 00:12:17.869 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/opal_spec.gcno 00:12:17.870 /home/vagrant/spdk_repo/spdk/test/cpp_headers/pci_ids.gcno:no functions found 00:12:17.870 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/pci_ids.gcno 00:12:17.870 /home/vagrant/spdk_repo/spdk/test/cpp_headers/pipe.gcno:no functions found 00:12:17.870 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/pipe.gcno 00:12:17.870 /home/vagrant/spdk_repo/spdk/test/cpp_headers/queue.gcno:no functions found 00:12:17.870 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/queue.gcno 00:12:17.870 /home/vagrant/spdk_repo/spdk/test/cpp_headers/reduce.gcno:no functions found 00:12:17.870 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/reduce.gcno 00:12:17.870 /home/vagrant/spdk_repo/spdk/test/cpp_headers/rpc.gcno:no functions found 00:12:17.870 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/rpc.gcno 00:12:17.870 /home/vagrant/spdk_repo/spdk/test/cpp_headers/scheduler.gcno:no functions found 
00:12:17.870 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/scheduler.gcno 00:12:17.870 /home/vagrant/spdk_repo/spdk/test/cpp_headers/scsi.gcno:no functions found 00:12:17.870 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/scsi.gcno 00:12:17.870 /home/vagrant/spdk_repo/spdk/test/cpp_headers/scsi_spec.gcno:no functions found 00:12:17.870 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/scsi_spec.gcno 00:12:17.870 /home/vagrant/spdk_repo/spdk/test/cpp_headers/sock.gcno:no functions found 00:12:17.870 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/sock.gcno 00:12:17.870 /home/vagrant/spdk_repo/spdk/test/cpp_headers/stdinc.gcno:no functions found 00:12:17.870 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/stdinc.gcno 00:12:17.870 /home/vagrant/spdk_repo/spdk/test/cpp_headers/string.gcno:no functions found 00:12:17.870 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/string.gcno 00:12:17.870 /home/vagrant/spdk_repo/spdk/test/cpp_headers/thread.gcno:no functions found 00:12:17.870 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/thread.gcno 00:12:17.870 /home/vagrant/spdk_repo/spdk/test/cpp_headers/trace.gcno:no functions found 00:12:17.870 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/trace.gcno 00:12:17.870 /home/vagrant/spdk_repo/spdk/test/cpp_headers/trace_parser.gcno:no functions found 00:12:17.870 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/trace_parser.gcno 00:12:17.870 /home/vagrant/spdk_repo/spdk/test/cpp_headers/tree.gcno:no functions found 00:12:17.870 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/tree.gcno 00:12:17.870 /home/vagrant/spdk_repo/spdk/test/cpp_headers/ublk.gcno:no functions found 00:12:17.870 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/ublk.gcno 00:12:17.870 /home/vagrant/spdk_repo/spdk/test/cpp_headers/util.gcno:no functions found 00:12:17.870 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/util.gcno 00:12:17.870 /home/vagrant/spdk_repo/spdk/test/cpp_headers/uuid.gcno:no functions found 00:12:17.870 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/uuid.gcno 00:12:17.870 /home/vagrant/spdk_repo/spdk/test/cpp_headers/version.gcno:no functions found 00:12:17.870 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/version.gcno 00:12:17.870 /home/vagrant/spdk_repo/spdk/test/cpp_headers/vfio_user_pci.gcno:no functions found 00:12:17.870 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/vfio_user_pci.gcno 00:12:17.870 /home/vagrant/spdk_repo/spdk/test/cpp_headers/vfio_user_spec.gcno:no functions found 00:12:17.870 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/vfio_user_spec.gcno 00:12:17.870 /home/vagrant/spdk_repo/spdk/test/cpp_headers/vhost.gcno:no functions found 00:12:17.870 geninfo: WARNING: GCOV did not produce any data for 
/home/vagrant/spdk_repo/spdk/test/cpp_headers/vhost.gcno 00:12:17.870 /home/vagrant/spdk_repo/spdk/test/cpp_headers/vmd.gcno:no functions found 00:12:17.870 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/vmd.gcno 00:12:17.870 /home/vagrant/spdk_repo/spdk/test/cpp_headers/xor.gcno:no functions found 00:12:17.870 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/xor.gcno 00:12:17.870 /home/vagrant/spdk_repo/spdk/test/cpp_headers/zipf.gcno:no functions found 00:12:17.870 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/zipf.gcno 00:12:19.787 09:40:42 -- spdk/autotest.sh@89 -- # timing_enter pre_cleanup 00:12:19.787 09:40:42 -- common/autotest_common.sh@710 -- # xtrace_disable 00:12:19.787 09:40:42 -- common/autotest_common.sh@10 -- # set +x 00:12:19.787 09:40:42 -- spdk/autotest.sh@91 -- # rm -f 00:12:19.787 09:40:42 -- spdk/autotest.sh@94 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:12:20.355 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:12:20.355 0000:00:11.0 (1b36 0010): Already using the nvme driver 00:12:20.355 0000:00:10.0 (1b36 0010): Already using the nvme driver 00:12:20.355 09:40:43 -- spdk/autotest.sh@96 -- # get_zoned_devs 00:12:20.355 09:40:43 -- common/autotest_common.sh@1655 -- # zoned_devs=() 00:12:20.355 09:40:43 -- common/autotest_common.sh@1655 -- # local -gA zoned_devs 00:12:20.355 09:40:43 -- common/autotest_common.sh@1656 -- # local nvme bdf 00:12:20.355 09:40:43 -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:12:20.355 09:40:43 -- common/autotest_common.sh@1659 -- # is_block_zoned nvme0n1 00:12:20.355 09:40:43 -- common/autotest_common.sh@1648 -- # local device=nvme0n1 00:12:20.355 09:40:43 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:12:20.355 09:40:43 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:12:20.355 09:40:43 -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:12:20.355 09:40:43 -- common/autotest_common.sh@1659 -- # is_block_zoned nvme1n1 00:12:20.355 09:40:43 -- common/autotest_common.sh@1648 -- # local device=nvme1n1 00:12:20.355 09:40:43 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:12:20.355 09:40:43 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:12:20.355 09:40:43 -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:12:20.355 09:40:43 -- common/autotest_common.sh@1659 -- # is_block_zoned nvme1n2 00:12:20.355 09:40:43 -- common/autotest_common.sh@1648 -- # local device=nvme1n2 00:12:20.355 09:40:43 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme1n2/queue/zoned ]] 00:12:20.355 09:40:43 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:12:20.355 09:40:43 -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:12:20.355 09:40:43 -- common/autotest_common.sh@1659 -- # is_block_zoned nvme1n3 00:12:20.355 09:40:43 -- common/autotest_common.sh@1648 -- # local device=nvme1n3 00:12:20.355 09:40:43 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme1n3/queue/zoned ]] 00:12:20.355 09:40:43 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:12:20.355 09:40:43 -- spdk/autotest.sh@98 -- # (( 0 > 0 )) 00:12:20.355 09:40:43 -- spdk/autotest.sh@110 -- # for dev in /dev/nvme*n!(*p*) 00:12:20.355 09:40:43 -- spdk/autotest.sh@112 -- # 
[[ -z '' ]] 00:12:20.355 09:40:43 -- spdk/autotest.sh@113 -- # block_in_use /dev/nvme0n1 00:12:20.355 09:40:43 -- scripts/common.sh@378 -- # local block=/dev/nvme0n1 pt 00:12:20.355 09:40:43 -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme0n1 00:12:20.355 No valid GPT data, bailing 00:12:20.355 09:40:43 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:12:20.355 09:40:43 -- scripts/common.sh@391 -- # pt= 00:12:20.355 09:40:43 -- scripts/common.sh@392 -- # return 1 00:12:20.355 09:40:43 -- spdk/autotest.sh@114 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1 00:12:20.355 1+0 records in 00:12:20.355 1+0 records out 00:12:20.355 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00557551 s, 188 MB/s 00:12:20.355 09:40:43 -- spdk/autotest.sh@110 -- # for dev in /dev/nvme*n!(*p*) 00:12:20.355 09:40:43 -- spdk/autotest.sh@112 -- # [[ -z '' ]] 00:12:20.355 09:40:43 -- spdk/autotest.sh@113 -- # block_in_use /dev/nvme1n1 00:12:20.355 09:40:43 -- scripts/common.sh@378 -- # local block=/dev/nvme1n1 pt 00:12:20.355 09:40:43 -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n1 00:12:20.614 No valid GPT data, bailing 00:12:20.614 09:40:43 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:12:20.614 09:40:43 -- scripts/common.sh@391 -- # pt= 00:12:20.614 09:40:43 -- scripts/common.sh@392 -- # return 1 00:12:20.614 09:40:43 -- spdk/autotest.sh@114 -- # dd if=/dev/zero of=/dev/nvme1n1 bs=1M count=1 00:12:20.614 1+0 records in 00:12:20.614 1+0 records out 00:12:20.614 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00530212 s, 198 MB/s 00:12:20.614 09:40:43 -- spdk/autotest.sh@110 -- # for dev in /dev/nvme*n!(*p*) 00:12:20.614 09:40:43 -- spdk/autotest.sh@112 -- # [[ -z '' ]] 00:12:20.614 09:40:43 -- spdk/autotest.sh@113 -- # block_in_use /dev/nvme1n2 00:12:20.614 09:40:43 -- scripts/common.sh@378 -- # local block=/dev/nvme1n2 pt 00:12:20.614 09:40:43 -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n2 00:12:20.614 No valid GPT data, bailing 00:12:20.614 09:40:43 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme1n2 00:12:20.614 09:40:43 -- scripts/common.sh@391 -- # pt= 00:12:20.614 09:40:43 -- scripts/common.sh@392 -- # return 1 00:12:20.614 09:40:43 -- spdk/autotest.sh@114 -- # dd if=/dev/zero of=/dev/nvme1n2 bs=1M count=1 00:12:20.614 1+0 records in 00:12:20.614 1+0 records out 00:12:20.614 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00530651 s, 198 MB/s 00:12:20.614 09:40:43 -- spdk/autotest.sh@110 -- # for dev in /dev/nvme*n!(*p*) 00:12:20.614 09:40:43 -- spdk/autotest.sh@112 -- # [[ -z '' ]] 00:12:20.614 09:40:43 -- spdk/autotest.sh@113 -- # block_in_use /dev/nvme1n3 00:12:20.614 09:40:43 -- scripts/common.sh@378 -- # local block=/dev/nvme1n3 pt 00:12:20.614 09:40:43 -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n3 00:12:20.614 No valid GPT data, bailing 00:12:20.614 09:40:43 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme1n3 00:12:20.614 09:40:43 -- scripts/common.sh@391 -- # pt= 00:12:20.614 09:40:43 -- scripts/common.sh@392 -- # return 1 00:12:20.614 09:40:43 -- spdk/autotest.sh@114 -- # dd if=/dev/zero of=/dev/nvme1n3 bs=1M count=1 00:12:20.614 1+0 records in 00:12:20.614 1+0 records out 00:12:20.614 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00547794 s, 191 MB/s 00:12:20.614 09:40:43 -- spdk/autotest.sh@118 -- # sync 00:12:20.873 09:40:44 -- spdk/autotest.sh@120 -- # 
xtrace_disable_per_cmd reap_spdk_processes 00:12:20.873 09:40:44 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null' 00:12:20.873 09:40:44 -- common/autotest_common.sh@22 -- # reap_spdk_processes 00:12:22.853 09:40:45 -- spdk/autotest.sh@124 -- # uname -s 00:12:22.853 09:40:45 -- spdk/autotest.sh@124 -- # '[' Linux = Linux ']' 00:12:22.853 09:40:45 -- spdk/autotest.sh@125 -- # run_test setup.sh /home/vagrant/spdk_repo/spdk/test/setup/test-setup.sh 00:12:22.853 09:40:45 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:12:22.853 09:40:45 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:12:22.853 09:40:45 -- common/autotest_common.sh@10 -- # set +x 00:12:22.853 ************************************ 00:12:22.853 START TEST setup.sh 00:12:22.853 ************************************ 00:12:22.853 09:40:45 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/setup/test-setup.sh 00:12:22.853 * Looking for test storage... 00:12:22.853 * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup 00:12:22.853 09:40:46 -- setup/test-setup.sh@10 -- # uname -s 00:12:22.853 09:40:46 -- setup/test-setup.sh@10 -- # [[ Linux == Linux ]] 00:12:22.853 09:40:46 -- setup/test-setup.sh@12 -- # run_test acl /home/vagrant/spdk_repo/spdk/test/setup/acl.sh 00:12:22.853 09:40:46 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:12:22.853 09:40:46 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:12:22.853 09:40:46 -- common/autotest_common.sh@10 -- # set +x 00:12:22.853 ************************************ 00:12:22.853 START TEST acl 00:12:22.853 ************************************ 00:12:22.853 09:40:46 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/setup/acl.sh 00:12:22.853 * Looking for test storage... 
00:12:22.853 * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup 00:12:22.853 09:40:46 -- setup/acl.sh@10 -- # get_zoned_devs 00:12:22.853 09:40:46 -- common/autotest_common.sh@1655 -- # zoned_devs=() 00:12:22.853 09:40:46 -- common/autotest_common.sh@1655 -- # local -gA zoned_devs 00:12:22.853 09:40:46 -- common/autotest_common.sh@1656 -- # local nvme bdf 00:12:22.853 09:40:46 -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:12:22.853 09:40:46 -- common/autotest_common.sh@1659 -- # is_block_zoned nvme0n1 00:12:22.853 09:40:46 -- common/autotest_common.sh@1648 -- # local device=nvme0n1 00:12:22.853 09:40:46 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:12:22.853 09:40:46 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:12:22.853 09:40:46 -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:12:22.853 09:40:46 -- common/autotest_common.sh@1659 -- # is_block_zoned nvme1n1 00:12:22.853 09:40:46 -- common/autotest_common.sh@1648 -- # local device=nvme1n1 00:12:22.853 09:40:46 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:12:22.853 09:40:46 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:12:22.853 09:40:46 -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:12:22.853 09:40:46 -- common/autotest_common.sh@1659 -- # is_block_zoned nvme1n2 00:12:22.853 09:40:46 -- common/autotest_common.sh@1648 -- # local device=nvme1n2 00:12:22.853 09:40:46 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme1n2/queue/zoned ]] 00:12:22.853 09:40:46 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:12:22.853 09:40:46 -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:12:22.853 09:40:46 -- common/autotest_common.sh@1659 -- # is_block_zoned nvme1n3 00:12:22.853 09:40:46 -- common/autotest_common.sh@1648 -- # local device=nvme1n3 00:12:22.853 09:40:46 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme1n3/queue/zoned ]] 00:12:22.853 09:40:46 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:12:22.853 09:40:46 -- setup/acl.sh@12 -- # devs=() 00:12:22.853 09:40:46 -- setup/acl.sh@12 -- # declare -a devs 00:12:22.853 09:40:46 -- setup/acl.sh@13 -- # drivers=() 00:12:22.853 09:40:46 -- setup/acl.sh@13 -- # declare -A drivers 00:12:22.853 09:40:46 -- setup/acl.sh@51 -- # setup reset 00:12:22.853 09:40:46 -- setup/common.sh@9 -- # [[ reset == output ]] 00:12:22.853 09:40:46 -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:12:23.788 09:40:46 -- setup/acl.sh@52 -- # collect_setup_devs 00:12:23.788 09:40:46 -- setup/acl.sh@16 -- # local dev driver 00:12:23.788 09:40:46 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:12:23.788 09:40:46 -- setup/acl.sh@15 -- # setup output status 00:12:23.788 09:40:46 -- setup/common.sh@9 -- # [[ output == output ]] 00:12:23.788 09:40:46 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 00:12:24.356 09:40:47 -- setup/acl.sh@19 -- # [[ (1af4 == *:*:*.* ]] 00:12:24.356 09:40:47 -- setup/acl.sh@19 -- # continue 00:12:24.356 09:40:47 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:12:24.356 Hugepages 00:12:24.356 node hugesize free / total 00:12:24.356 09:40:47 -- setup/acl.sh@19 -- # [[ 1048576kB == *:*:*.* ]] 00:12:24.356 09:40:47 -- setup/acl.sh@19 -- # continue 00:12:24.356 09:40:47 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:12:24.356 00:12:24.356 Type BDF Vendor Device NUMA Driver 
Device Block devices 00:12:24.356 09:40:47 -- setup/acl.sh@19 -- # [[ 2048kB == *:*:*.* ]] 00:12:24.356 09:40:47 -- setup/acl.sh@19 -- # continue 00:12:24.356 09:40:47 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:12:24.356 09:40:47 -- setup/acl.sh@19 -- # [[ 0000:00:03.0 == *:*:*.* ]] 00:12:24.356 09:40:47 -- setup/acl.sh@20 -- # [[ virtio-pci == nvme ]] 00:12:24.356 09:40:47 -- setup/acl.sh@20 -- # continue 00:12:24.356 09:40:47 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:12:24.616 09:40:47 -- setup/acl.sh@19 -- # [[ 0000:00:10.0 == *:*:*.* ]] 00:12:24.616 09:40:47 -- setup/acl.sh@20 -- # [[ nvme == nvme ]] 00:12:24.616 09:40:47 -- setup/acl.sh@21 -- # [[ '' == *\0\0\0\0\:\0\0\:\1\0\.\0* ]] 00:12:24.616 09:40:47 -- setup/acl.sh@22 -- # devs+=("$dev") 00:12:24.616 09:40:47 -- setup/acl.sh@22 -- # drivers["$dev"]=nvme 00:12:24.616 09:40:47 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:12:24.616 09:40:47 -- setup/acl.sh@19 -- # [[ 0000:00:11.0 == *:*:*.* ]] 00:12:24.616 09:40:47 -- setup/acl.sh@20 -- # [[ nvme == nvme ]] 00:12:24.616 09:40:47 -- setup/acl.sh@21 -- # [[ '' == *\0\0\0\0\:\0\0\:\1\1\.\0* ]] 00:12:24.616 09:40:47 -- setup/acl.sh@22 -- # devs+=("$dev") 00:12:24.616 09:40:47 -- setup/acl.sh@22 -- # drivers["$dev"]=nvme 00:12:24.616 09:40:47 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:12:24.616 09:40:47 -- setup/acl.sh@24 -- # (( 2 > 0 )) 00:12:24.616 09:40:47 -- setup/acl.sh@54 -- # run_test denied denied 00:12:24.616 09:40:47 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:12:24.616 09:40:47 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:12:24.616 09:40:47 -- common/autotest_common.sh@10 -- # set +x 00:12:24.616 ************************************ 00:12:24.616 START TEST denied 00:12:24.616 ************************************ 00:12:24.616 09:40:47 -- common/autotest_common.sh@1111 -- # denied 00:12:24.616 09:40:47 -- setup/acl.sh@38 -- # PCI_BLOCKED=' 0000:00:10.0' 00:12:24.616 09:40:47 -- setup/acl.sh@39 -- # grep 'Skipping denied controller at 0000:00:10.0' 00:12:24.616 09:40:47 -- setup/acl.sh@38 -- # setup output config 00:12:24.616 09:40:47 -- setup/common.sh@9 -- # [[ output == output ]] 00:12:24.616 09:40:47 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:12:25.557 0000:00:10.0 (1b36 0010): Skipping denied controller at 0000:00:10.0 00:12:25.557 09:40:48 -- setup/acl.sh@40 -- # verify 0000:00:10.0 00:12:25.557 09:40:48 -- setup/acl.sh@28 -- # local dev driver 00:12:25.557 09:40:48 -- setup/acl.sh@30 -- # for dev in "$@" 00:12:25.557 09:40:48 -- setup/acl.sh@31 -- # [[ -e /sys/bus/pci/devices/0000:00:10.0 ]] 00:12:25.557 09:40:48 -- setup/acl.sh@32 -- # readlink -f /sys/bus/pci/devices/0000:00:10.0/driver 00:12:25.557 09:40:48 -- setup/acl.sh@32 -- # driver=/sys/bus/pci/drivers/nvme 00:12:25.557 09:40:48 -- setup/acl.sh@33 -- # [[ nvme == \n\v\m\e ]] 00:12:25.557 09:40:48 -- setup/acl.sh@41 -- # setup reset 00:12:25.557 09:40:48 -- setup/common.sh@9 -- # [[ reset == output ]] 00:12:25.557 09:40:48 -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:12:26.125 00:12:26.125 real 0m1.532s 00:12:26.125 user 0m0.594s 00:12:26.125 sys 0m0.860s 00:12:26.125 09:40:49 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:12:26.125 09:40:49 -- common/autotest_common.sh@10 -- # set +x 00:12:26.125 ************************************ 00:12:26.125 END TEST denied 00:12:26.125 ************************************ 00:12:26.385 09:40:49 -- setup/acl.sh@55 
-- # run_test allowed allowed 00:12:26.385 09:40:49 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:12:26.385 09:40:49 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:12:26.385 09:40:49 -- common/autotest_common.sh@10 -- # set +x 00:12:26.385 ************************************ 00:12:26.385 START TEST allowed 00:12:26.385 ************************************ 00:12:26.385 09:40:49 -- common/autotest_common.sh@1111 -- # allowed 00:12:26.385 09:40:49 -- setup/acl.sh@45 -- # PCI_ALLOWED=0000:00:10.0 00:12:26.385 09:40:49 -- setup/acl.sh@46 -- # grep -E '0000:00:10.0 .*: nvme -> .*' 00:12:26.385 09:40:49 -- setup/acl.sh@45 -- # setup output config 00:12:26.385 09:40:49 -- setup/common.sh@9 -- # [[ output == output ]] 00:12:26.385 09:40:49 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:12:27.321 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:12:27.321 09:40:50 -- setup/acl.sh@47 -- # verify 0000:00:11.0 00:12:27.321 09:40:50 -- setup/acl.sh@28 -- # local dev driver 00:12:27.321 09:40:50 -- setup/acl.sh@30 -- # for dev in "$@" 00:12:27.321 09:40:50 -- setup/acl.sh@31 -- # [[ -e /sys/bus/pci/devices/0000:00:11.0 ]] 00:12:27.321 09:40:50 -- setup/acl.sh@32 -- # readlink -f /sys/bus/pci/devices/0000:00:11.0/driver 00:12:27.321 09:40:50 -- setup/acl.sh@32 -- # driver=/sys/bus/pci/drivers/nvme 00:12:27.321 09:40:50 -- setup/acl.sh@33 -- # [[ nvme == \n\v\m\e ]] 00:12:27.321 09:40:50 -- setup/acl.sh@48 -- # setup reset 00:12:27.321 09:40:50 -- setup/common.sh@9 -- # [[ reset == output ]] 00:12:27.321 09:40:50 -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:12:27.886 00:12:27.886 real 0m1.599s 00:12:27.886 user 0m0.703s 00:12:27.886 sys 0m0.892s 00:12:27.886 09:40:51 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:12:27.886 ************************************ 00:12:27.886 09:40:51 -- common/autotest_common.sh@10 -- # set +x 00:12:27.886 END TEST allowed 00:12:27.886 ************************************ 00:12:27.886 00:12:27.886 real 0m5.131s 00:12:27.886 user 0m2.229s 00:12:27.886 sys 0m2.791s 00:12:27.886 09:40:51 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:12:27.886 09:40:51 -- common/autotest_common.sh@10 -- # set +x 00:12:27.887 ************************************ 00:12:27.887 END TEST acl 00:12:27.887 ************************************ 00:12:28.148 09:40:51 -- setup/test-setup.sh@13 -- # run_test hugepages /home/vagrant/spdk_repo/spdk/test/setup/hugepages.sh 00:12:28.148 09:40:51 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:12:28.148 09:40:51 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:12:28.148 09:40:51 -- common/autotest_common.sh@10 -- # set +x 00:12:28.148 ************************************ 00:12:28.148 START TEST hugepages 00:12:28.148 ************************************ 00:12:28.148 09:40:51 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/setup/hugepages.sh 00:12:28.148 * Looking for test storage... 
00:12:28.148 * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup 00:12:28.148 09:40:51 -- setup/hugepages.sh@10 -- # nodes_sys=() 00:12:28.148 09:40:51 -- setup/hugepages.sh@10 -- # declare -a nodes_sys 00:12:28.148 09:40:51 -- setup/hugepages.sh@12 -- # declare -i default_hugepages=0 00:12:28.148 09:40:51 -- setup/hugepages.sh@13 -- # declare -i no_nodes=0 00:12:28.148 09:40:51 -- setup/hugepages.sh@14 -- # declare -i nr_hugepages=0 00:12:28.148 09:40:51 -- setup/hugepages.sh@16 -- # get_meminfo Hugepagesize 00:12:28.148 09:40:51 -- setup/common.sh@17 -- # local get=Hugepagesize 00:12:28.148 09:40:51 -- setup/common.sh@18 -- # local node= 00:12:28.148 09:40:51 -- setup/common.sh@19 -- # local var val 00:12:28.148 09:40:51 -- setup/common.sh@20 -- # local mem_f mem 00:12:28.148 09:40:51 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:12:28.148 09:40:51 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:12:28.148 09:40:51 -- setup/common.sh@25 -- # [[ -n '' ]] 00:12:28.148 09:40:51 -- setup/common.sh@28 -- # mapfile -t mem 00:12:28.148 09:40:51 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:12:28.148 09:40:51 -- setup/common.sh@31 -- # IFS=': ' 00:12:28.148 09:40:51 -- setup/common.sh@31 -- # read -r var val _ 00:12:28.148 09:40:51 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 5449596 kB' 'MemAvailable: 7411976 kB' 'Buffers: 2436 kB' 'Cached: 2174828 kB' 'SwapCached: 0 kB' 'Active: 833948 kB' 'Inactive: 1449544 kB' 'Active(anon): 116716 kB' 'Inactive(anon): 0 kB' 'Active(file): 717232 kB' 'Inactive(file): 1449544 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 640 kB' 'Writeback: 0 kB' 'AnonPages: 107928 kB' 'Mapped: 48640 kB' 'Shmem: 10488 kB' 'KReclaimable: 65088 kB' 'Slab: 138080 kB' 'SReclaimable: 65088 kB' 'SUnreclaim: 72992 kB' 'KernelStack: 6432 kB' 'PageTables: 4396 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 12412436 kB' 'Committed_AS: 340520 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54724 kB' 'VmallocChunk: 0 kB' 'Percpu: 6240 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 2048' 'HugePages_Free: 2048' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 4194304 kB' 'DirectMap4k: 173932 kB' 'DirectMap2M: 6117376 kB' 'DirectMap1G: 8388608 kB' 00:12:28.148 09:40:51 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:12:28.148 09:40:51 -- setup/common.sh@32 -- # continue 00:12:28.148 09:40:51 -- setup/common.sh@31 -- # IFS=': ' 00:12:28.148 09:40:51 -- setup/common.sh@31 -- # read -r var val _ 00:12:28.148 09:40:51 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:12:28.148 09:40:51 -- setup/common.sh@32 -- # continue 00:12:28.148 09:40:51 -- setup/common.sh@31 -- # IFS=': ' 00:12:28.148 09:40:51 -- setup/common.sh@31 -- # read -r var val _ 00:12:28.148 09:40:51 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:12:28.148 09:40:51 -- setup/common.sh@32 -- # continue 00:12:28.148 09:40:51 -- setup/common.sh@31 -- # IFS=': ' 00:12:28.148 09:40:51 -- setup/common.sh@31 -- # read -r var val _ 00:12:28.148 09:40:51 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:12:28.148 09:40:51 -- 
setup/common.sh@32 -- # continue 00:12:28.148 09:40:51 -- setup/common.sh@31 -- # IFS=': ' 00:12:28.148 09:40:51 -- setup/common.sh@31 -- # read -r var val _ 00:12:28.148 09:40:51 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:12:28.148 09:40:51 -- setup/common.sh@32 -- # continue 00:12:28.148 09:40:51 -- setup/common.sh@31 -- # IFS=': ' 00:12:28.148 09:40:51 -- setup/common.sh@31 -- # read -r var val _ 00:12:28.148 09:40:51 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:12:28.148 09:40:51 -- setup/common.sh@32 -- # continue 00:12:28.148 09:40:51 -- setup/common.sh@31 -- # IFS=': ' 00:12:28.148 09:40:51 -- setup/common.sh@31 -- # read -r var val _ 00:12:28.148 09:40:51 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:12:28.148 09:40:51 -- setup/common.sh@32 -- # continue 00:12:28.148 09:40:51 -- setup/common.sh@31 -- # IFS=': ' 00:12:28.148 09:40:51 -- setup/common.sh@31 -- # read -r var val _ 00:12:28.148 09:40:51 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:12:28.148 09:40:51 -- setup/common.sh@32 -- # continue 00:12:28.148 09:40:51 -- setup/common.sh@31 -- # IFS=': ' 00:12:28.148 09:40:51 -- setup/common.sh@31 -- # read -r var val _ 00:12:28.148 09:40:51 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:12:28.148 09:40:51 -- setup/common.sh@32 -- # continue 00:12:28.148 09:40:51 -- setup/common.sh@31 -- # IFS=': ' 00:12:28.148 09:40:51 -- setup/common.sh@31 -- # read -r var val _ 00:12:28.148 09:40:51 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:12:28.148 09:40:51 -- setup/common.sh@32 -- # continue 00:12:28.148 09:40:51 -- setup/common.sh@31 -- # IFS=': ' 00:12:28.148 09:40:51 -- setup/common.sh@31 -- # read -r var val _ 00:12:28.148 09:40:51 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:12:28.148 09:40:51 -- setup/common.sh@32 -- # continue 00:12:28.148 09:40:51 -- setup/common.sh@31 -- # IFS=': ' 00:12:28.148 09:40:51 -- setup/common.sh@31 -- # read -r var val _ 00:12:28.148 09:40:51 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:12:28.148 09:40:51 -- setup/common.sh@32 -- # continue 00:12:28.148 09:40:51 -- setup/common.sh@31 -- # IFS=': ' 00:12:28.148 09:40:51 -- setup/common.sh@31 -- # read -r var val _ 00:12:28.148 09:40:51 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:12:28.148 09:40:51 -- setup/common.sh@32 -- # continue 00:12:28.148 09:40:51 -- setup/common.sh@31 -- # IFS=': ' 00:12:28.148 09:40:51 -- setup/common.sh@31 -- # read -r var val _ 00:12:28.148 09:40:51 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:12:28.148 09:40:51 -- setup/common.sh@32 -- # continue 00:12:28.148 09:40:51 -- setup/common.sh@31 -- # IFS=': ' 00:12:28.148 09:40:51 -- setup/common.sh@31 -- # read -r var val _ 00:12:28.148 09:40:51 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:12:28.148 09:40:51 -- setup/common.sh@32 -- # continue 00:12:28.148 09:40:51 -- setup/common.sh@31 -- # IFS=': ' 00:12:28.148 09:40:51 -- setup/common.sh@31 -- # read -r var val _ 00:12:28.148 09:40:51 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:12:28.148 09:40:51 -- setup/common.sh@32 -- # continue 00:12:28.148 09:40:51 -- setup/common.sh@31 -- # IFS=': ' 00:12:28.148 09:40:51 -- setup/common.sh@31 -- # read -r var val _ 00:12:28.148 09:40:51 -- setup/common.sh@32 -- # [[ Zswap == 
\H\u\g\e\p\a\g\e\s\i\z\e ]] 00:12:28.148 09:40:51 -- setup/common.sh@32 -- # continue 00:12:28.148 09:40:51 -- setup/common.sh@31 -- # IFS=': ' 00:12:28.148 09:40:51 -- setup/common.sh@31 -- # read -r var val _ 00:12:28.148 09:40:51 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:12:28.148 09:40:51 -- setup/common.sh@32 -- # continue 00:12:28.148 09:40:51 -- setup/common.sh@31 -- # IFS=': ' 00:12:28.148 09:40:51 -- setup/common.sh@31 -- # read -r var val _ 00:12:28.148 09:40:51 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:12:28.148 09:40:51 -- setup/common.sh@32 -- # continue 00:12:28.148 09:40:51 -- setup/common.sh@31 -- # IFS=': ' 00:12:28.148 09:40:51 -- setup/common.sh@31 -- # read -r var val _ 00:12:28.148 09:40:51 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:12:28.148 09:40:51 -- setup/common.sh@32 -- # continue 00:12:28.148 09:40:51 -- setup/common.sh@31 -- # IFS=': ' 00:12:28.148 09:40:51 -- setup/common.sh@31 -- # read -r var val _ 00:12:28.148 09:40:51 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:12:28.148 09:40:51 -- setup/common.sh@32 -- # continue 00:12:28.148 09:40:51 -- setup/common.sh@31 -- # IFS=': ' 00:12:28.148 09:40:51 -- setup/common.sh@31 -- # read -r var val _ 00:12:28.148 09:40:51 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:12:28.148 09:40:51 -- setup/common.sh@32 -- # continue 00:12:28.148 09:40:51 -- setup/common.sh@31 -- # IFS=': ' 00:12:28.148 09:40:51 -- setup/common.sh@31 -- # read -r var val _ 00:12:28.148 09:40:51 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:12:28.148 09:40:51 -- setup/common.sh@32 -- # continue 00:12:28.148 09:40:51 -- setup/common.sh@31 -- # IFS=': ' 00:12:28.148 09:40:51 -- setup/common.sh@31 -- # read -r var val _ 00:12:28.148 09:40:51 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:12:28.148 09:40:51 -- setup/common.sh@32 -- # continue 00:12:28.148 09:40:51 -- setup/common.sh@31 -- # IFS=': ' 00:12:28.148 09:40:51 -- setup/common.sh@31 -- # read -r var val _ 00:12:28.148 09:40:51 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:12:28.148 09:40:51 -- setup/common.sh@32 -- # continue 00:12:28.148 09:40:51 -- setup/common.sh@31 -- # IFS=': ' 00:12:28.148 09:40:51 -- setup/common.sh@31 -- # read -r var val _ 00:12:28.148 09:40:51 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:12:28.148 09:40:51 -- setup/common.sh@32 -- # continue 00:12:28.148 09:40:51 -- setup/common.sh@31 -- # IFS=': ' 00:12:28.148 09:40:51 -- setup/common.sh@31 -- # read -r var val _ 00:12:28.148 09:40:51 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:12:28.148 09:40:51 -- setup/common.sh@32 -- # continue 00:12:28.148 09:40:51 -- setup/common.sh@31 -- # IFS=': ' 00:12:28.148 09:40:51 -- setup/common.sh@31 -- # read -r var val _ 00:12:28.148 09:40:51 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:12:28.148 09:40:51 -- setup/common.sh@32 -- # continue 00:12:28.149 09:40:51 -- setup/common.sh@31 -- # IFS=': ' 00:12:28.149 09:40:51 -- setup/common.sh@31 -- # read -r var val _ 00:12:28.149 09:40:51 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:12:28.149 09:40:51 -- setup/common.sh@32 -- # continue 00:12:28.149 09:40:51 -- setup/common.sh@31 -- # IFS=': ' 00:12:28.149 09:40:51 -- setup/common.sh@31 -- # read -r var val _ 00:12:28.149 09:40:51 -- 
setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:12:28.149 09:40:51 -- setup/common.sh@32 -- # continue 00:12:28.149 09:40:51 -- setup/common.sh@31 -- # IFS=': ' 00:12:28.149 09:40:51 -- setup/common.sh@31 -- # read -r var val _ 00:12:28.149 09:40:51 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:12:28.149 09:40:51 -- setup/common.sh@32 -- # continue 00:12:28.149 09:40:51 -- setup/common.sh@31 -- # IFS=': ' 00:12:28.149 09:40:51 -- setup/common.sh@31 -- # read -r var val _ 00:12:28.149 09:40:51 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:12:28.149 09:40:51 -- setup/common.sh@32 -- # continue 00:12:28.149 09:40:51 -- setup/common.sh@31 -- # IFS=': ' 00:12:28.149 09:40:51 -- setup/common.sh@31 -- # read -r var val _ 00:12:28.149 09:40:51 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:12:28.149 09:40:51 -- setup/common.sh@32 -- # continue 00:12:28.149 09:40:51 -- setup/common.sh@31 -- # IFS=': ' 00:12:28.149 09:40:51 -- setup/common.sh@31 -- # read -r var val _ 00:12:28.149 09:40:51 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:12:28.149 09:40:51 -- setup/common.sh@32 -- # continue 00:12:28.149 09:40:51 -- setup/common.sh@31 -- # IFS=': ' 00:12:28.149 09:40:51 -- setup/common.sh@31 -- # read -r var val _ 00:12:28.149 09:40:51 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:12:28.149 09:40:51 -- setup/common.sh@32 -- # continue 00:12:28.149 09:40:51 -- setup/common.sh@31 -- # IFS=': ' 00:12:28.149 09:40:51 -- setup/common.sh@31 -- # read -r var val _ 00:12:28.149 09:40:51 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:12:28.149 09:40:51 -- setup/common.sh@32 -- # continue 00:12:28.149 09:40:51 -- setup/common.sh@31 -- # IFS=': ' 00:12:28.149 09:40:51 -- setup/common.sh@31 -- # read -r var val _ 00:12:28.149 09:40:51 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:12:28.149 09:40:51 -- setup/common.sh@32 -- # continue 00:12:28.149 09:40:51 -- setup/common.sh@31 -- # IFS=': ' 00:12:28.149 09:40:51 -- setup/common.sh@31 -- # read -r var val _ 00:12:28.149 09:40:51 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:12:28.149 09:40:51 -- setup/common.sh@32 -- # continue 00:12:28.149 09:40:51 -- setup/common.sh@31 -- # IFS=': ' 00:12:28.149 09:40:51 -- setup/common.sh@31 -- # read -r var val _ 00:12:28.149 09:40:51 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:12:28.149 09:40:51 -- setup/common.sh@32 -- # continue 00:12:28.149 09:40:51 -- setup/common.sh@31 -- # IFS=': ' 00:12:28.149 09:40:51 -- setup/common.sh@31 -- # read -r var val _ 00:12:28.149 09:40:51 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:12:28.149 09:40:51 -- setup/common.sh@32 -- # continue 00:12:28.149 09:40:51 -- setup/common.sh@31 -- # IFS=': ' 00:12:28.149 09:40:51 -- setup/common.sh@31 -- # read -r var val _ 00:12:28.149 09:40:51 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:12:28.149 09:40:51 -- setup/common.sh@32 -- # continue 00:12:28.149 09:40:51 -- setup/common.sh@31 -- # IFS=': ' 00:12:28.149 09:40:51 -- setup/common.sh@31 -- # read -r var val _ 00:12:28.149 09:40:51 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:12:28.149 09:40:51 -- setup/common.sh@32 -- # continue 00:12:28.149 09:40:51 -- setup/common.sh@31 -- # IFS=': ' 00:12:28.149 09:40:51 -- 
setup/common.sh@31 -- # read -r var val _ 00:12:28.149 09:40:51 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:12:28.149 09:40:51 -- setup/common.sh@32 -- # continue 00:12:28.149 09:40:51 -- setup/common.sh@31 -- # IFS=': ' 00:12:28.149 09:40:51 -- setup/common.sh@31 -- # read -r var val _ 00:12:28.149 09:40:51 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:12:28.149 09:40:51 -- setup/common.sh@32 -- # continue 00:12:28.149 09:40:51 -- setup/common.sh@31 -- # IFS=': ' 00:12:28.149 09:40:51 -- setup/common.sh@31 -- # read -r var val _ 00:12:28.149 09:40:51 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:12:28.149 09:40:51 -- setup/common.sh@32 -- # continue 00:12:28.149 09:40:51 -- setup/common.sh@31 -- # IFS=': ' 00:12:28.149 09:40:51 -- setup/common.sh@31 -- # read -r var val _ 00:12:28.149 09:40:51 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:12:28.149 09:40:51 -- setup/common.sh@32 -- # continue 00:12:28.149 09:40:51 -- setup/common.sh@31 -- # IFS=': ' 00:12:28.149 09:40:51 -- setup/common.sh@31 -- # read -r var val _ 00:12:28.149 09:40:51 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:12:28.149 09:40:51 -- setup/common.sh@32 -- # continue 00:12:28.149 09:40:51 -- setup/common.sh@31 -- # IFS=': ' 00:12:28.149 09:40:51 -- setup/common.sh@31 -- # read -r var val _ 00:12:28.149 09:40:51 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:12:28.149 09:40:51 -- setup/common.sh@32 -- # continue 00:12:28.149 09:40:51 -- setup/common.sh@31 -- # IFS=': ' 00:12:28.149 09:40:51 -- setup/common.sh@31 -- # read -r var val _ 00:12:28.149 09:40:51 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:12:28.149 09:40:51 -- setup/common.sh@32 -- # continue 00:12:28.149 09:40:51 -- setup/common.sh@31 -- # IFS=': ' 00:12:28.149 09:40:51 -- setup/common.sh@31 -- # read -r var val _ 00:12:28.149 09:40:51 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:12:28.149 09:40:51 -- setup/common.sh@32 -- # continue 00:12:28.149 09:40:51 -- setup/common.sh@31 -- # IFS=': ' 00:12:28.149 09:40:51 -- setup/common.sh@31 -- # read -r var val _ 00:12:28.149 09:40:51 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:12:28.149 09:40:51 -- setup/common.sh@32 -- # continue 00:12:28.149 09:40:51 -- setup/common.sh@31 -- # IFS=': ' 00:12:28.149 09:40:51 -- setup/common.sh@31 -- # read -r var val _ 00:12:28.149 09:40:51 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:12:28.149 09:40:51 -- setup/common.sh@32 -- # continue 00:12:28.149 09:40:51 -- setup/common.sh@31 -- # IFS=': ' 00:12:28.149 09:40:51 -- setup/common.sh@31 -- # read -r var val _ 00:12:28.149 09:40:51 -- setup/common.sh@32 -- # [[ Hugepagesize == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:12:28.149 09:40:51 -- setup/common.sh@33 -- # echo 2048 00:12:28.149 09:40:51 -- setup/common.sh@33 -- # return 0 00:12:28.149 09:40:51 -- setup/hugepages.sh@16 -- # default_hugepages=2048 00:12:28.149 09:40:51 -- setup/hugepages.sh@17 -- # default_huge_nr=/sys/kernel/mm/hugepages/hugepages-2048kB/nr_hugepages 00:12:28.149 09:40:51 -- setup/hugepages.sh@18 -- # global_huge_nr=/proc/sys/vm/nr_hugepages 00:12:28.149 09:40:51 -- setup/hugepages.sh@21 -- # unset -v HUGE_EVEN_ALLOC 00:12:28.149 09:40:51 -- setup/hugepages.sh@22 -- # unset -v HUGEMEM 00:12:28.149 09:40:51 -- setup/hugepages.sh@23 -- # unset -v HUGENODE 
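The loop that just completed is setup/common.sh's get_meminfo resolving Hugepagesize: each /proc/meminfo line is split on ': ' into key and value, non-matching keys fall through to continue, and the matching key's value (2048 kB on this VM) is echoed back to hugepages.sh, which stores it as default_hugepages. A minimal standalone sketch of that lookup, assuming a simplified while-read form rather than the mapfile-based helper the trace shows:

    get_meminfo() {
        # Print the value of one /proc/meminfo field, e.g. Hugepagesize -> 2048
        local get=$1 var val _
        while IFS=': ' read -r var val _; do
            if [[ $var == "$get" ]]; then
                echo "$val"
                return 0
            fi
        done < /proc/meminfo
        return 1
    }

    default_hugepages=$(get_meminfo Hugepagesize)   # 2048 (kB) in this run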
00:12:28.149 09:40:51 -- setup/hugepages.sh@24 -- # unset -v NRHUGE 00:12:28.149 09:40:51 -- setup/hugepages.sh@207 -- # get_nodes 00:12:28.149 09:40:51 -- setup/hugepages.sh@27 -- # local node 00:12:28.149 09:40:51 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:12:28.149 09:40:51 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=2048 00:12:28.149 09:40:51 -- setup/hugepages.sh@32 -- # no_nodes=1 00:12:28.149 09:40:51 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:12:28.149 09:40:51 -- setup/hugepages.sh@208 -- # clear_hp 00:12:28.149 09:40:51 -- setup/hugepages.sh@37 -- # local node hp 00:12:28.149 09:40:51 -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:12:28.150 09:40:51 -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:12:28.150 09:40:51 -- setup/hugepages.sh@41 -- # echo 0 00:12:28.150 09:40:51 -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:12:28.150 09:40:51 -- setup/hugepages.sh@41 -- # echo 0 00:12:28.150 09:40:51 -- setup/hugepages.sh@45 -- # export CLEAR_HUGE=yes 00:12:28.150 09:40:51 -- setup/hugepages.sh@45 -- # CLEAR_HUGE=yes 00:12:28.150 09:40:51 -- setup/hugepages.sh@210 -- # run_test default_setup default_setup 00:12:28.150 09:40:51 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:12:28.150 09:40:51 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:12:28.150 09:40:51 -- common/autotest_common.sh@10 -- # set +x 00:12:28.409 ************************************ 00:12:28.409 START TEST default_setup 00:12:28.409 ************************************ 00:12:28.409 09:40:51 -- common/autotest_common.sh@1111 -- # default_setup 00:12:28.409 09:40:51 -- setup/hugepages.sh@136 -- # get_test_nr_hugepages 2097152 0 00:12:28.409 09:40:51 -- setup/hugepages.sh@49 -- # local size=2097152 00:12:28.409 09:40:51 -- setup/hugepages.sh@50 -- # (( 2 > 1 )) 00:12:28.409 09:40:51 -- setup/hugepages.sh@51 -- # shift 00:12:28.409 09:40:51 -- setup/hugepages.sh@52 -- # node_ids=('0') 00:12:28.409 09:40:51 -- setup/hugepages.sh@52 -- # local node_ids 00:12:28.409 09:40:51 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:12:28.409 09:40:51 -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:12:28.409 09:40:51 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 00:12:28.409 09:40:51 -- setup/hugepages.sh@62 -- # user_nodes=('0') 00:12:28.409 09:40:51 -- setup/hugepages.sh@62 -- # local user_nodes 00:12:28.409 09:40:51 -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:12:28.409 09:40:51 -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:12:28.409 09:40:51 -- setup/hugepages.sh@67 -- # nodes_test=() 00:12:28.409 09:40:51 -- setup/hugepages.sh@67 -- # local -g nodes_test 00:12:28.409 09:40:51 -- setup/hugepages.sh@69 -- # (( 1 > 0 )) 00:12:28.409 09:40:51 -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:12:28.409 09:40:51 -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=1024 00:12:28.409 09:40:51 -- setup/hugepages.sh@73 -- # return 0 00:12:28.409 09:40:51 -- setup/hugepages.sh@137 -- # setup output 00:12:28.409 09:40:51 -- setup/common.sh@9 -- # [[ output == output ]] 00:12:28.409 09:40:51 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:12:28.976 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:12:29.239 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:12:29.239 
0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:12:29.239 09:40:52 -- setup/hugepages.sh@138 -- # verify_nr_hugepages 00:12:29.239 09:40:52 -- setup/hugepages.sh@89 -- # local node 00:12:29.239 09:40:52 -- setup/hugepages.sh@90 -- # local sorted_t 00:12:29.239 09:40:52 -- setup/hugepages.sh@91 -- # local sorted_s 00:12:29.239 09:40:52 -- setup/hugepages.sh@92 -- # local surp 00:12:29.239 09:40:52 -- setup/hugepages.sh@93 -- # local resv 00:12:29.239 09:40:52 -- setup/hugepages.sh@94 -- # local anon 00:12:29.239 09:40:52 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:12:29.239 09:40:52 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:12:29.239 09:40:52 -- setup/common.sh@17 -- # local get=AnonHugePages 00:12:29.239 09:40:52 -- setup/common.sh@18 -- # local node= 00:12:29.239 09:40:52 -- setup/common.sh@19 -- # local var val 00:12:29.239 09:40:52 -- setup/common.sh@20 -- # local mem_f mem 00:12:29.239 09:40:52 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:12:29.239 09:40:52 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:12:29.239 09:40:52 -- setup/common.sh@25 -- # [[ -n '' ]] 00:12:29.239 09:40:52 -- setup/common.sh@28 -- # mapfile -t mem 00:12:29.239 09:40:52 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:12:29.239 09:40:52 -- setup/common.sh@31 -- # IFS=': ' 00:12:29.239 09:40:52 -- setup/common.sh@31 -- # read -r var val _ 00:12:29.239 09:40:52 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 7535812 kB' 'MemAvailable: 9498020 kB' 'Buffers: 2436 kB' 'Cached: 2174816 kB' 'SwapCached: 0 kB' 'Active: 850232 kB' 'Inactive: 1449556 kB' 'Active(anon): 133000 kB' 'Inactive(anon): 0 kB' 'Active(file): 717232 kB' 'Inactive(file): 1449556 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 816 kB' 'Writeback: 0 kB' 'AnonPages: 124080 kB' 'Mapped: 48760 kB' 'Shmem: 10464 kB' 'KReclaimable: 64716 kB' 'Slab: 137696 kB' 'SReclaimable: 64716 kB' 'SUnreclaim: 72980 kB' 'KernelStack: 6388 kB' 'PageTables: 4552 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461012 kB' 'Committed_AS: 357116 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54724 kB' 'VmallocChunk: 0 kB' 'Percpu: 6240 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 173932 kB' 'DirectMap2M: 6117376 kB' 'DirectMap1G: 8388608 kB' 00:12:29.239 09:40:52 -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:29.239 09:40:52 -- setup/common.sh@32 -- # continue 00:12:29.239 09:40:52 -- setup/common.sh@31 -- # IFS=': ' 00:12:29.239 09:40:52 -- setup/common.sh@31 -- # read -r var val _ 00:12:29.239 09:40:52 -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:29.239 09:40:52 -- setup/common.sh@32 -- # continue 00:12:29.239 09:40:52 -- setup/common.sh@31 -- # IFS=': ' 00:12:29.239 09:40:52 -- setup/common.sh@31 -- # read -r var val _ 00:12:29.239 09:40:52 -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:29.239 09:40:52 -- setup/common.sh@32 -- # continue 00:12:29.239 09:40:52 -- setup/common.sh@31 -- # IFS=': ' 00:12:29.239 09:40:52 -- 
setup/common.sh@31 -- # read -r var val _ 00:12:29.239 09:40:52 -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:29.239 09:40:52 -- setup/common.sh@32 -- # continue 00:12:29.239 09:40:52 -- setup/common.sh@31 -- # IFS=': ' 00:12:29.239 09:40:52 -- setup/common.sh@31 -- # read -r var val _ 00:12:29.239 09:40:52 -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:29.239 09:40:52 -- setup/common.sh@32 -- # continue 00:12:29.240 09:40:52 -- setup/common.sh@31 -- # IFS=': ' 00:12:29.240 09:40:52 -- setup/common.sh@31 -- # read -r var val _ 00:12:29.240 09:40:52 -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:29.240 09:40:52 -- setup/common.sh@32 -- # continue 00:12:29.240 09:40:52 -- setup/common.sh@31 -- # IFS=': ' 00:12:29.240 09:40:52 -- setup/common.sh@31 -- # read -r var val _ 00:12:29.240 09:40:52 -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:29.240 09:40:52 -- setup/common.sh@32 -- # continue 00:12:29.240 09:40:52 -- setup/common.sh@31 -- # IFS=': ' 00:12:29.240 09:40:52 -- setup/common.sh@31 -- # read -r var val _ 00:12:29.240 09:40:52 -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:29.240 09:40:52 -- setup/common.sh@32 -- # continue 00:12:29.240 09:40:52 -- setup/common.sh@31 -- # IFS=': ' 00:12:29.240 09:40:52 -- setup/common.sh@31 -- # read -r var val _ 00:12:29.240 09:40:52 -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:29.240 09:40:52 -- setup/common.sh@32 -- # continue 00:12:29.240 09:40:52 -- setup/common.sh@31 -- # IFS=': ' 00:12:29.240 09:40:52 -- setup/common.sh@31 -- # read -r var val _ 00:12:29.240 09:40:52 -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:29.240 09:40:52 -- setup/common.sh@32 -- # continue 00:12:29.240 09:40:52 -- setup/common.sh@31 -- # IFS=': ' 00:12:29.240 09:40:52 -- setup/common.sh@31 -- # read -r var val _ 00:12:29.240 09:40:52 -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:29.240 09:40:52 -- setup/common.sh@32 -- # continue 00:12:29.240 09:40:52 -- setup/common.sh@31 -- # IFS=': ' 00:12:29.240 09:40:52 -- setup/common.sh@31 -- # read -r var val _ 00:12:29.240 09:40:52 -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:29.240 09:40:52 -- setup/common.sh@32 -- # continue 00:12:29.240 09:40:52 -- setup/common.sh@31 -- # IFS=': ' 00:12:29.240 09:40:52 -- setup/common.sh@31 -- # read -r var val _ 00:12:29.240 09:40:52 -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:29.240 09:40:52 -- setup/common.sh@32 -- # continue 00:12:29.240 09:40:52 -- setup/common.sh@31 -- # IFS=': ' 00:12:29.240 09:40:52 -- setup/common.sh@31 -- # read -r var val _ 00:12:29.240 09:40:52 -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:29.240 09:40:52 -- setup/common.sh@32 -- # continue 00:12:29.240 09:40:52 -- setup/common.sh@31 -- # IFS=': ' 00:12:29.240 09:40:52 -- setup/common.sh@31 -- # read -r var val _ 00:12:29.240 09:40:52 -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:29.240 09:40:52 -- setup/common.sh@32 -- # continue 00:12:29.240 09:40:52 -- setup/common.sh@31 -- # IFS=': ' 00:12:29.240 09:40:52 -- setup/common.sh@31 -- # read -r var val _ 00:12:29.240 09:40:52 -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:29.240 09:40:52 -- setup/common.sh@32 -- # continue 00:12:29.240 
09:40:52 -- setup/common.sh@31 -- # IFS=': ' 00:12:29.240 09:40:52 -- setup/common.sh@31 -- # read -r var val _ 00:12:29.240 09:40:52 -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:29.240 09:40:52 -- setup/common.sh@32 -- # continue 00:12:29.240 09:40:52 -- setup/common.sh@31 -- # IFS=': ' 00:12:29.240 09:40:52 -- setup/common.sh@31 -- # read -r var val _ 00:12:29.240 09:40:52 -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:29.240 09:40:52 -- setup/common.sh@32 -- # continue 00:12:29.240 09:40:52 -- setup/common.sh@31 -- # IFS=': ' 00:12:29.240 09:40:52 -- setup/common.sh@31 -- # read -r var val _ 00:12:29.240 09:40:52 -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:29.240 09:40:52 -- setup/common.sh@32 -- # continue 00:12:29.240 09:40:52 -- setup/common.sh@31 -- # IFS=': ' 00:12:29.240 09:40:52 -- setup/common.sh@31 -- # read -r var val _ 00:12:29.240 09:40:52 -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:29.240 09:40:52 -- setup/common.sh@32 -- # continue 00:12:29.240 09:40:52 -- setup/common.sh@31 -- # IFS=': ' 00:12:29.240 09:40:52 -- setup/common.sh@31 -- # read -r var val _ 00:12:29.240 09:40:52 -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:29.240 09:40:52 -- setup/common.sh@32 -- # continue 00:12:29.240 09:40:52 -- setup/common.sh@31 -- # IFS=': ' 00:12:29.240 09:40:52 -- setup/common.sh@31 -- # read -r var val _ 00:12:29.240 09:40:52 -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:29.240 09:40:52 -- setup/common.sh@32 -- # continue 00:12:29.240 09:40:52 -- setup/common.sh@31 -- # IFS=': ' 00:12:29.240 09:40:52 -- setup/common.sh@31 -- # read -r var val _ 00:12:29.240 09:40:52 -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:29.240 09:40:52 -- setup/common.sh@32 -- # continue 00:12:29.240 09:40:52 -- setup/common.sh@31 -- # IFS=': ' 00:12:29.240 09:40:52 -- setup/common.sh@31 -- # read -r var val _ 00:12:29.240 09:40:52 -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:29.240 09:40:52 -- setup/common.sh@32 -- # continue 00:12:29.240 09:40:52 -- setup/common.sh@31 -- # IFS=': ' 00:12:29.240 09:40:52 -- setup/common.sh@31 -- # read -r var val _ 00:12:29.240 09:40:52 -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:29.240 09:40:52 -- setup/common.sh@32 -- # continue 00:12:29.240 09:40:52 -- setup/common.sh@31 -- # IFS=': ' 00:12:29.240 09:40:52 -- setup/common.sh@31 -- # read -r var val _ 00:12:29.240 09:40:52 -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:29.240 09:40:52 -- setup/common.sh@32 -- # continue 00:12:29.240 09:40:52 -- setup/common.sh@31 -- # IFS=': ' 00:12:29.240 09:40:52 -- setup/common.sh@31 -- # read -r var val _ 00:12:29.240 09:40:52 -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:29.240 09:40:52 -- setup/common.sh@32 -- # continue 00:12:29.240 09:40:52 -- setup/common.sh@31 -- # IFS=': ' 00:12:29.240 09:40:52 -- setup/common.sh@31 -- # read -r var val _ 00:12:29.240 09:40:52 -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:29.240 09:40:52 -- setup/common.sh@32 -- # continue 00:12:29.240 09:40:52 -- setup/common.sh@31 -- # IFS=': ' 00:12:29.240 09:40:52 -- setup/common.sh@31 -- # read -r var val _ 00:12:29.240 09:40:52 -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:29.240 09:40:52 
-- setup/common.sh@32 -- # continue 00:12:29.240 09:40:52 -- setup/common.sh@31 -- # IFS=': ' 00:12:29.240 09:40:52 -- setup/common.sh@31 -- # read -r var val _ 00:12:29.240 09:40:52 -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:29.240 09:40:52 -- setup/common.sh@32 -- # continue 00:12:29.240 09:40:52 -- setup/common.sh@31 -- # IFS=': ' 00:12:29.240 09:40:52 -- setup/common.sh@31 -- # read -r var val _ 00:12:29.240 09:40:52 -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:29.240 09:40:52 -- setup/common.sh@32 -- # continue 00:12:29.240 09:40:52 -- setup/common.sh@31 -- # IFS=': ' 00:12:29.240 09:40:52 -- setup/common.sh@31 -- # read -r var val _ 00:12:29.240 09:40:52 -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:29.240 09:40:52 -- setup/common.sh@32 -- # continue 00:12:29.240 09:40:52 -- setup/common.sh@31 -- # IFS=': ' 00:12:29.240 09:40:52 -- setup/common.sh@31 -- # read -r var val _ 00:12:29.240 09:40:52 -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:29.240 09:40:52 -- setup/common.sh@32 -- # continue 00:12:29.240 09:40:52 -- setup/common.sh@31 -- # IFS=': ' 00:12:29.240 09:40:52 -- setup/common.sh@31 -- # read -r var val _ 00:12:29.240 09:40:52 -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:29.240 09:40:52 -- setup/common.sh@32 -- # continue 00:12:29.240 09:40:52 -- setup/common.sh@31 -- # IFS=': ' 00:12:29.240 09:40:52 -- setup/common.sh@31 -- # read -r var val _ 00:12:29.240 09:40:52 -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:29.240 09:40:52 -- setup/common.sh@32 -- # continue 00:12:29.240 09:40:52 -- setup/common.sh@31 -- # IFS=': ' 00:12:29.240 09:40:52 -- setup/common.sh@31 -- # read -r var val _ 00:12:29.240 09:40:52 -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:29.240 09:40:52 -- setup/common.sh@32 -- # continue 00:12:29.240 09:40:52 -- setup/common.sh@31 -- # IFS=': ' 00:12:29.240 09:40:52 -- setup/common.sh@31 -- # read -r var val _ 00:12:29.240 09:40:52 -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:29.240 09:40:52 -- setup/common.sh@32 -- # continue 00:12:29.240 09:40:52 -- setup/common.sh@31 -- # IFS=': ' 00:12:29.240 09:40:52 -- setup/common.sh@31 -- # read -r var val _ 00:12:29.240 09:40:52 -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:29.240 09:40:52 -- setup/common.sh@32 -- # continue 00:12:29.240 09:40:52 -- setup/common.sh@31 -- # IFS=': ' 00:12:29.240 09:40:52 -- setup/common.sh@31 -- # read -r var val _ 00:12:29.240 09:40:52 -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:29.240 09:40:52 -- setup/common.sh@32 -- # continue 00:12:29.240 09:40:52 -- setup/common.sh@31 -- # IFS=': ' 00:12:29.240 09:40:52 -- setup/common.sh@31 -- # read -r var val _ 00:12:29.240 09:40:52 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:29.240 09:40:52 -- setup/common.sh@32 -- # continue 00:12:29.240 09:40:52 -- setup/common.sh@31 -- # IFS=': ' 00:12:29.240 09:40:52 -- setup/common.sh@31 -- # read -r var val _ 00:12:29.240 09:40:52 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:29.240 09:40:52 -- setup/common.sh@33 -- # echo 0 00:12:29.240 09:40:52 -- setup/common.sh@33 -- # return 0 00:12:29.240 09:40:52 -- setup/hugepages.sh@97 -- # anon=0 00:12:29.240 09:40:52 -- 
setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:12:29.240 09:40:52 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:12:29.240 09:40:52 -- setup/common.sh@18 -- # local node= 00:12:29.240 09:40:52 -- setup/common.sh@19 -- # local var val 00:12:29.241 09:40:52 -- setup/common.sh@20 -- # local mem_f mem 00:12:29.241 09:40:52 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:12:29.241 09:40:52 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:12:29.241 09:40:52 -- setup/common.sh@25 -- # [[ -n '' ]] 00:12:29.241 09:40:52 -- setup/common.sh@28 -- # mapfile -t mem 00:12:29.241 09:40:52 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:12:29.241 09:40:52 -- setup/common.sh@31 -- # IFS=': ' 00:12:29.241 09:40:52 -- setup/common.sh@31 -- # read -r var val _ 00:12:29.241 09:40:52 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 7536116 kB' 'MemAvailable: 9498324 kB' 'Buffers: 2436 kB' 'Cached: 2174816 kB' 'SwapCached: 0 kB' 'Active: 850032 kB' 'Inactive: 1449556 kB' 'Active(anon): 132800 kB' 'Inactive(anon): 0 kB' 'Active(file): 717232 kB' 'Inactive(file): 1449556 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 816 kB' 'Writeback: 0 kB' 'AnonPages: 123900 kB' 'Mapped: 48664 kB' 'Shmem: 10464 kB' 'KReclaimable: 64716 kB' 'Slab: 137696 kB' 'SReclaimable: 64716 kB' 'SUnreclaim: 72980 kB' 'KernelStack: 6388 kB' 'PageTables: 4544 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461012 kB' 'Committed_AS: 357116 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54692 kB' 'VmallocChunk: 0 kB' 'Percpu: 6240 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 173932 kB' 'DirectMap2M: 6117376 kB' 'DirectMap1G: 8388608 kB' 00:12:29.241 09:40:52 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:29.241 09:40:52 -- setup/common.sh@32 -- # continue 00:12:29.241 09:40:52 -- setup/common.sh@31 -- # IFS=': ' 00:12:29.241 09:40:52 -- setup/common.sh@31 -- # read -r var val _ 00:12:29.241 09:40:52 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:29.241 09:40:52 -- setup/common.sh@32 -- # continue 00:12:29.241 09:40:52 -- setup/common.sh@31 -- # IFS=': ' 00:12:29.241 09:40:52 -- setup/common.sh@31 -- # read -r var val _ 00:12:29.241 09:40:52 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:29.241 09:40:52 -- setup/common.sh@32 -- # continue 00:12:29.241 09:40:52 -- setup/common.sh@31 -- # IFS=': ' 00:12:29.241 09:40:52 -- setup/common.sh@31 -- # read -r var val _ 00:12:29.241 09:40:52 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:29.241 09:40:52 -- setup/common.sh@32 -- # continue 00:12:29.241 09:40:52 -- setup/common.sh@31 -- # IFS=': ' 00:12:29.241 09:40:52 -- setup/common.sh@31 -- # read -r var val _ 00:12:29.241 09:40:52 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:29.241 09:40:52 -- setup/common.sh@32 -- # continue 00:12:29.241 09:40:52 -- setup/common.sh@31 -- # IFS=': ' 00:12:29.241 09:40:52 -- setup/common.sh@31 -- # read -r var val _ 00:12:29.241 09:40:52 -- 
setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:29.241 09:40:52 -- setup/common.sh@32 -- # continue 00:12:29.241 09:40:52 -- setup/common.sh@31 -- # IFS=': ' 00:12:29.241 09:40:52 -- setup/common.sh@31 -- # read -r var val _ 00:12:29.241 09:40:52 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:29.241 09:40:52 -- setup/common.sh@32 -- # continue 00:12:29.241 09:40:52 -- setup/common.sh@31 -- # IFS=': ' 00:12:29.241 09:40:52 -- setup/common.sh@31 -- # read -r var val _ 00:12:29.241 09:40:52 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:29.241 09:40:52 -- setup/common.sh@32 -- # continue 00:12:29.241 09:40:52 -- setup/common.sh@31 -- # IFS=': ' 00:12:29.241 09:40:52 -- setup/common.sh@31 -- # read -r var val _ 00:12:29.241 09:40:52 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:29.241 09:40:52 -- setup/common.sh@32 -- # continue 00:12:29.241 09:40:52 -- setup/common.sh@31 -- # IFS=': ' 00:12:29.241 09:40:52 -- setup/common.sh@31 -- # read -r var val _ 00:12:29.241 09:40:52 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:29.241 09:40:52 -- setup/common.sh@32 -- # continue 00:12:29.241 09:40:52 -- setup/common.sh@31 -- # IFS=': ' 00:12:29.241 09:40:52 -- setup/common.sh@31 -- # read -r var val _ 00:12:29.241 09:40:52 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:29.241 09:40:52 -- setup/common.sh@32 -- # continue 00:12:29.241 09:40:52 -- setup/common.sh@31 -- # IFS=': ' 00:12:29.241 09:40:52 -- setup/common.sh@31 -- # read -r var val _ 00:12:29.241 09:40:52 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:29.241 09:40:52 -- setup/common.sh@32 -- # continue 00:12:29.241 09:40:52 -- setup/common.sh@31 -- # IFS=': ' 00:12:29.241 09:40:52 -- setup/common.sh@31 -- # read -r var val _ 00:12:29.241 09:40:52 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:29.241 09:40:52 -- setup/common.sh@32 -- # continue 00:12:29.241 09:40:52 -- setup/common.sh@31 -- # IFS=': ' 00:12:29.241 09:40:52 -- setup/common.sh@31 -- # read -r var val _ 00:12:29.241 09:40:52 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:29.241 09:40:52 -- setup/common.sh@32 -- # continue 00:12:29.241 09:40:52 -- setup/common.sh@31 -- # IFS=': ' 00:12:29.241 09:40:52 -- setup/common.sh@31 -- # read -r var val _ 00:12:29.241 09:40:52 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:29.241 09:40:52 -- setup/common.sh@32 -- # continue 00:12:29.241 09:40:52 -- setup/common.sh@31 -- # IFS=': ' 00:12:29.241 09:40:52 -- setup/common.sh@31 -- # read -r var val _ 00:12:29.241 09:40:52 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:29.241 09:40:52 -- setup/common.sh@32 -- # continue 00:12:29.241 09:40:52 -- setup/common.sh@31 -- # IFS=': ' 00:12:29.241 09:40:52 -- setup/common.sh@31 -- # read -r var val _ 00:12:29.241 09:40:52 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:29.241 09:40:52 -- setup/common.sh@32 -- # continue 00:12:29.241 09:40:52 -- setup/common.sh@31 -- # IFS=': ' 00:12:29.241 09:40:52 -- setup/common.sh@31 -- # read -r var val _ 00:12:29.241 09:40:52 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:29.241 09:40:52 -- setup/common.sh@32 -- # continue 00:12:29.241 09:40:52 -- setup/common.sh@31 -- # IFS=': ' 
00:12:29.241 09:40:52 -- setup/common.sh@31 -- # read -r var val _ 00:12:29.241 09:40:52 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:29.241 09:40:52 -- setup/common.sh@32 -- # continue 00:12:29.241 09:40:52 -- setup/common.sh@31 -- # IFS=': ' 00:12:29.241 09:40:52 -- setup/common.sh@31 -- # read -r var val _ 00:12:29.241 09:40:52 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:29.241 09:40:52 -- setup/common.sh@32 -- # continue 00:12:29.241 09:40:52 -- setup/common.sh@31 -- # IFS=': ' 00:12:29.241 09:40:52 -- setup/common.sh@31 -- # read -r var val _ 00:12:29.241 09:40:52 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:29.241 09:40:52 -- setup/common.sh@32 -- # continue 00:12:29.241 09:40:52 -- setup/common.sh@31 -- # IFS=': ' 00:12:29.241 09:40:52 -- setup/common.sh@31 -- # read -r var val _ 00:12:29.241 09:40:52 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:29.241 09:40:52 -- setup/common.sh@32 -- # continue 00:12:29.241 09:40:52 -- setup/common.sh@31 -- # IFS=': ' 00:12:29.241 09:40:52 -- setup/common.sh@31 -- # read -r var val _ 00:12:29.241 09:40:52 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:29.241 09:40:52 -- setup/common.sh@32 -- # continue 00:12:29.241 09:40:52 -- setup/common.sh@31 -- # IFS=': ' 00:12:29.241 09:40:52 -- setup/common.sh@31 -- # read -r var val _ 00:12:29.241 09:40:52 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:29.241 09:40:52 -- setup/common.sh@32 -- # continue 00:12:29.241 09:40:52 -- setup/common.sh@31 -- # IFS=': ' 00:12:29.241 09:40:52 -- setup/common.sh@31 -- # read -r var val _ 00:12:29.241 09:40:52 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:29.241 09:40:52 -- setup/common.sh@32 -- # continue 00:12:29.241 09:40:52 -- setup/common.sh@31 -- # IFS=': ' 00:12:29.241 09:40:52 -- setup/common.sh@31 -- # read -r var val _ 00:12:29.241 09:40:52 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:29.241 09:40:52 -- setup/common.sh@32 -- # continue 00:12:29.241 09:40:52 -- setup/common.sh@31 -- # IFS=': ' 00:12:29.241 09:40:52 -- setup/common.sh@31 -- # read -r var val _ 00:12:29.241 09:40:52 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:29.241 09:40:52 -- setup/common.sh@32 -- # continue 00:12:29.241 09:40:52 -- setup/common.sh@31 -- # IFS=': ' 00:12:29.241 09:40:52 -- setup/common.sh@31 -- # read -r var val _ 00:12:29.241 09:40:52 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:29.241 09:40:52 -- setup/common.sh@32 -- # continue 00:12:29.241 09:40:52 -- setup/common.sh@31 -- # IFS=': ' 00:12:29.241 09:40:52 -- setup/common.sh@31 -- # read -r var val _ 00:12:29.241 09:40:52 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:29.241 09:40:52 -- setup/common.sh@32 -- # continue 00:12:29.241 09:40:52 -- setup/common.sh@31 -- # IFS=': ' 00:12:29.241 09:40:52 -- setup/common.sh@31 -- # read -r var val _ 00:12:29.241 09:40:52 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:29.241 09:40:52 -- setup/common.sh@32 -- # continue 00:12:29.241 09:40:52 -- setup/common.sh@31 -- # IFS=': ' 00:12:29.242 09:40:52 -- setup/common.sh@31 -- # read -r var val _ 00:12:29.242 09:40:52 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:29.242 09:40:52 -- 
setup/common.sh@32 -- # continue 00:12:29.242 09:40:52 -- setup/common.sh@31 -- # IFS=': ' 00:12:29.242 09:40:52 -- setup/common.sh@31 -- # read -r var val _ 00:12:29.242 09:40:52 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:29.242 09:40:52 -- setup/common.sh@32 -- # continue 00:12:29.242 09:40:52 -- setup/common.sh@31 -- # IFS=': ' 00:12:29.242 09:40:52 -- setup/common.sh@31 -- # read -r var val _ 00:12:29.242 09:40:52 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:29.242 09:40:52 -- setup/common.sh@32 -- # continue 00:12:29.242 09:40:52 -- setup/common.sh@31 -- # IFS=': ' 00:12:29.242 09:40:52 -- setup/common.sh@31 -- # read -r var val _ 00:12:29.242 09:40:52 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:29.242 09:40:52 -- setup/common.sh@32 -- # continue 00:12:29.242 09:40:52 -- setup/common.sh@31 -- # IFS=': ' 00:12:29.242 09:40:52 -- setup/common.sh@31 -- # read -r var val _ 00:12:29.242 09:40:52 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:29.242 09:40:52 -- setup/common.sh@32 -- # continue 00:12:29.242 09:40:52 -- setup/common.sh@31 -- # IFS=': ' 00:12:29.242 09:40:52 -- setup/common.sh@31 -- # read -r var val _ 00:12:29.242 09:40:52 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:29.242 09:40:52 -- setup/common.sh@32 -- # continue 00:12:29.242 09:40:52 -- setup/common.sh@31 -- # IFS=': ' 00:12:29.242 09:40:52 -- setup/common.sh@31 -- # read -r var val _ 00:12:29.242 09:40:52 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:29.242 09:40:52 -- setup/common.sh@32 -- # continue 00:12:29.242 09:40:52 -- setup/common.sh@31 -- # IFS=': ' 00:12:29.242 09:40:52 -- setup/common.sh@31 -- # read -r var val _ 00:12:29.242 09:40:52 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:29.242 09:40:52 -- setup/common.sh@32 -- # continue 00:12:29.242 09:40:52 -- setup/common.sh@31 -- # IFS=': ' 00:12:29.242 09:40:52 -- setup/common.sh@31 -- # read -r var val _ 00:12:29.242 09:40:52 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:29.242 09:40:52 -- setup/common.sh@32 -- # continue 00:12:29.242 09:40:52 -- setup/common.sh@31 -- # IFS=': ' 00:12:29.242 09:40:52 -- setup/common.sh@31 -- # read -r var val _ 00:12:29.242 09:40:52 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:29.242 09:40:52 -- setup/common.sh@32 -- # continue 00:12:29.242 09:40:52 -- setup/common.sh@31 -- # IFS=': ' 00:12:29.242 09:40:52 -- setup/common.sh@31 -- # read -r var val _ 00:12:29.242 09:40:52 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:29.242 09:40:52 -- setup/common.sh@32 -- # continue 00:12:29.242 09:40:52 -- setup/common.sh@31 -- # IFS=': ' 00:12:29.242 09:40:52 -- setup/common.sh@31 -- # read -r var val _ 00:12:29.242 09:40:52 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:29.242 09:40:52 -- setup/common.sh@32 -- # continue 00:12:29.242 09:40:52 -- setup/common.sh@31 -- # IFS=': ' 00:12:29.242 09:40:52 -- setup/common.sh@31 -- # read -r var val _ 00:12:29.242 09:40:52 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:29.242 09:40:52 -- setup/common.sh@32 -- # continue 00:12:29.242 09:40:52 -- setup/common.sh@31 -- # IFS=': ' 00:12:29.242 09:40:52 -- setup/common.sh@31 -- # read -r var val _ 
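Each get_meminfo call in this verify_nr_hugepages pass (AnonHugePages above, HugePages_Surp here, HugePages_Rsvd and HugePages_Total below) starts the same way: with no node argument the per-node meminfo path does not exist, so the helper falls back to /proc/meminfo, loads it with mapfile, and strips any "Node N " prefix before parsing. A hedged sketch of that source selection, with names taken from the trace but simplified for illustration:

    shopt -s extglob                                 # needed for the +([0-9]) pattern below
    get=$1 node=${2-}                                # meminfo key, optional NUMA node
    mem_f=/proc/meminfo                              # system-wide default
    if [[ -e /sys/devices/system/node/node$node/meminfo ]]; then
        mem_f=/sys/devices/system/node/node$node/meminfo   # per-node view when a node is given
    fi
    mapfile -t mem < "$mem_f"
    mem=("${mem[@]#Node +([0-9]) }")                 # per-node lines carry a "Node N " prefix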
00:12:29.242 09:40:52 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:29.242 09:40:52 -- setup/common.sh@32 -- # continue 00:12:29.242 09:40:52 -- setup/common.sh@31 -- # IFS=': ' 00:12:29.242 09:40:52 -- setup/common.sh@31 -- # read -r var val _ 00:12:29.242 09:40:52 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:29.242 09:40:52 -- setup/common.sh@32 -- # continue 00:12:29.242 09:40:52 -- setup/common.sh@31 -- # IFS=': ' 00:12:29.242 09:40:52 -- setup/common.sh@31 -- # read -r var val _ 00:12:29.242 09:40:52 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:29.242 09:40:52 -- setup/common.sh@32 -- # continue 00:12:29.242 09:40:52 -- setup/common.sh@31 -- # IFS=': ' 00:12:29.242 09:40:52 -- setup/common.sh@31 -- # read -r var val _ 00:12:29.242 09:40:52 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:29.242 09:40:52 -- setup/common.sh@32 -- # continue 00:12:29.242 09:40:52 -- setup/common.sh@31 -- # IFS=': ' 00:12:29.242 09:40:52 -- setup/common.sh@31 -- # read -r var val _ 00:12:29.242 09:40:52 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:29.242 09:40:52 -- setup/common.sh@32 -- # continue 00:12:29.242 09:40:52 -- setup/common.sh@31 -- # IFS=': ' 00:12:29.242 09:40:52 -- setup/common.sh@31 -- # read -r var val _ 00:12:29.242 09:40:52 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:29.242 09:40:52 -- setup/common.sh@32 -- # continue 00:12:29.242 09:40:52 -- setup/common.sh@31 -- # IFS=': ' 00:12:29.242 09:40:52 -- setup/common.sh@31 -- # read -r var val _ 00:12:29.242 09:40:52 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:29.242 09:40:52 -- setup/common.sh@32 -- # continue 00:12:29.242 09:40:52 -- setup/common.sh@31 -- # IFS=': ' 00:12:29.242 09:40:52 -- setup/common.sh@31 -- # read -r var val _ 00:12:29.242 09:40:52 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:29.242 09:40:52 -- setup/common.sh@32 -- # continue 00:12:29.242 09:40:52 -- setup/common.sh@31 -- # IFS=': ' 00:12:29.242 09:40:52 -- setup/common.sh@31 -- # read -r var val _ 00:12:29.242 09:40:52 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:29.242 09:40:52 -- setup/common.sh@33 -- # echo 0 00:12:29.242 09:40:52 -- setup/common.sh@33 -- # return 0 00:12:29.242 09:40:52 -- setup/hugepages.sh@99 -- # surp=0 00:12:29.242 09:40:52 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:12:29.242 09:40:52 -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:12:29.242 09:40:52 -- setup/common.sh@18 -- # local node= 00:12:29.242 09:40:52 -- setup/common.sh@19 -- # local var val 00:12:29.242 09:40:52 -- setup/common.sh@20 -- # local mem_f mem 00:12:29.242 09:40:52 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:12:29.242 09:40:52 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:12:29.242 09:40:52 -- setup/common.sh@25 -- # [[ -n '' ]] 00:12:29.242 09:40:52 -- setup/common.sh@28 -- # mapfile -t mem 00:12:29.242 09:40:52 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:12:29.242 09:40:52 -- setup/common.sh@31 -- # IFS=': ' 00:12:29.242 09:40:52 -- setup/common.sh@31 -- # read -r var val _ 00:12:29.242 09:40:52 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 7536116 kB' 'MemAvailable: 9498324 kB' 'Buffers: 2436 kB' 'Cached: 2174816 kB' 
'SwapCached: 0 kB' 'Active: 850032 kB' 'Inactive: 1449556 kB' 'Active(anon): 132800 kB' 'Inactive(anon): 0 kB' 'Active(file): 717232 kB' 'Inactive(file): 1449556 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 816 kB' 'Writeback: 0 kB' 'AnonPages: 123896 kB' 'Mapped: 48664 kB' 'Shmem: 10464 kB' 'KReclaimable: 64716 kB' 'Slab: 137696 kB' 'SReclaimable: 64716 kB' 'SUnreclaim: 72980 kB' 'KernelStack: 6388 kB' 'PageTables: 4544 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461012 kB' 'Committed_AS: 357116 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54692 kB' 'VmallocChunk: 0 kB' 'Percpu: 6240 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 173932 kB' 'DirectMap2M: 6117376 kB' 'DirectMap1G: 8388608 kB' 00:12:29.242 09:40:52 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:29.242 09:40:52 -- setup/common.sh@32 -- # continue 00:12:29.242 09:40:52 -- setup/common.sh@31 -- # IFS=': ' 00:12:29.242 09:40:52 -- setup/common.sh@31 -- # read -r var val _ 00:12:29.242 09:40:52 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:29.242 09:40:52 -- setup/common.sh@32 -- # continue 00:12:29.242 09:40:52 -- setup/common.sh@31 -- # IFS=': ' 00:12:29.242 09:40:52 -- setup/common.sh@31 -- # read -r var val _ 00:12:29.242 09:40:52 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:29.242 09:40:52 -- setup/common.sh@32 -- # continue 00:12:29.242 09:40:52 -- setup/common.sh@31 -- # IFS=': ' 00:12:29.242 09:40:52 -- setup/common.sh@31 -- # read -r var val _ 00:12:29.242 09:40:52 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:29.242 09:40:52 -- setup/common.sh@32 -- # continue 00:12:29.242 09:40:52 -- setup/common.sh@31 -- # IFS=': ' 00:12:29.242 09:40:52 -- setup/common.sh@31 -- # read -r var val _ 00:12:29.242 09:40:52 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:29.242 09:40:52 -- setup/common.sh@32 -- # continue 00:12:29.242 09:40:52 -- setup/common.sh@31 -- # IFS=': ' 00:12:29.242 09:40:52 -- setup/common.sh@31 -- # read -r var val _ 00:12:29.242 09:40:52 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:29.242 09:40:52 -- setup/common.sh@32 -- # continue 00:12:29.242 09:40:52 -- setup/common.sh@31 -- # IFS=': ' 00:12:29.242 09:40:52 -- setup/common.sh@31 -- # read -r var val _ 00:12:29.242 09:40:52 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:29.242 09:40:52 -- setup/common.sh@32 -- # continue 00:12:29.242 09:40:52 -- setup/common.sh@31 -- # IFS=': ' 00:12:29.242 09:40:52 -- setup/common.sh@31 -- # read -r var val _ 00:12:29.243 09:40:52 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:29.243 09:40:52 -- setup/common.sh@32 -- # continue 00:12:29.243 09:40:52 -- setup/common.sh@31 -- # IFS=': ' 00:12:29.243 09:40:52 -- setup/common.sh@31 -- # read -r var val _ 00:12:29.243 09:40:52 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:29.243 09:40:52 -- setup/common.sh@32 -- # continue 00:12:29.243 09:40:52 -- 
setup/common.sh@31 -- # IFS=': ' 00:12:29.243 09:40:52 -- setup/common.sh@31 -- # read -r var val _ 00:12:29.243 09:40:52 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:29.243 09:40:52 -- setup/common.sh@32 -- # continue 00:12:29.243 09:40:52 -- setup/common.sh@31 -- # IFS=': ' 00:12:29.243 09:40:52 -- setup/common.sh@31 -- # read -r var val _ 00:12:29.243 09:40:52 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:29.243 09:40:52 -- setup/common.sh@32 -- # continue 00:12:29.243 09:40:52 -- setup/common.sh@31 -- # IFS=': ' 00:12:29.243 09:40:52 -- setup/common.sh@31 -- # read -r var val _ 00:12:29.243 09:40:52 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:29.243 09:40:52 -- setup/common.sh@32 -- # continue 00:12:29.243 09:40:52 -- setup/common.sh@31 -- # IFS=': ' 00:12:29.243 09:40:52 -- setup/common.sh@31 -- # read -r var val _ 00:12:29.243 09:40:52 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:29.243 09:40:52 -- setup/common.sh@32 -- # continue 00:12:29.243 09:40:52 -- setup/common.sh@31 -- # IFS=': ' 00:12:29.243 09:40:52 -- setup/common.sh@31 -- # read -r var val _ 00:12:29.243 09:40:52 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:29.243 09:40:52 -- setup/common.sh@32 -- # continue 00:12:29.243 09:40:52 -- setup/common.sh@31 -- # IFS=': ' 00:12:29.243 09:40:52 -- setup/common.sh@31 -- # read -r var val _ 00:12:29.243 09:40:52 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:29.243 09:40:52 -- setup/common.sh@32 -- # continue 00:12:29.243 09:40:52 -- setup/common.sh@31 -- # IFS=': ' 00:12:29.243 09:40:52 -- setup/common.sh@31 -- # read -r var val _ 00:12:29.243 09:40:52 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:29.243 09:40:52 -- setup/common.sh@32 -- # continue 00:12:29.243 09:40:52 -- setup/common.sh@31 -- # IFS=': ' 00:12:29.243 09:40:52 -- setup/common.sh@31 -- # read -r var val _ 00:12:29.243 09:40:52 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:29.243 09:40:52 -- setup/common.sh@32 -- # continue 00:12:29.243 09:40:52 -- setup/common.sh@31 -- # IFS=': ' 00:12:29.243 09:40:52 -- setup/common.sh@31 -- # read -r var val _ 00:12:29.243 09:40:52 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:29.243 09:40:52 -- setup/common.sh@32 -- # continue 00:12:29.243 09:40:52 -- setup/common.sh@31 -- # IFS=': ' 00:12:29.243 09:40:52 -- setup/common.sh@31 -- # read -r var val _ 00:12:29.243 09:40:52 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:29.243 09:40:52 -- setup/common.sh@32 -- # continue 00:12:29.243 09:40:52 -- setup/common.sh@31 -- # IFS=': ' 00:12:29.243 09:40:52 -- setup/common.sh@31 -- # read -r var val _ 00:12:29.243 09:40:52 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:29.243 09:40:52 -- setup/common.sh@32 -- # continue 00:12:29.243 09:40:52 -- setup/common.sh@31 -- # IFS=': ' 00:12:29.243 09:40:52 -- setup/common.sh@31 -- # read -r var val _ 00:12:29.243 09:40:52 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:29.243 09:40:52 -- setup/common.sh@32 -- # continue 00:12:29.243 09:40:52 -- setup/common.sh@31 -- # IFS=': ' 00:12:29.243 09:40:52 -- setup/common.sh@31 -- # read -r var val _ 00:12:29.243 09:40:52 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d 
]] 00:12:29.243 09:40:52 -- setup/common.sh@32 -- # continue 00:12:29.243 09:40:52 -- setup/common.sh@31 -- # IFS=': ' 00:12:29.243 09:40:52 -- setup/common.sh@31 -- # read -r var val _ 00:12:29.243 09:40:52 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:29.243 09:40:52 -- setup/common.sh@32 -- # continue 00:12:29.243 09:40:52 -- setup/common.sh@31 -- # IFS=': ' 00:12:29.243 09:40:52 -- setup/common.sh@31 -- # read -r var val _ 00:12:29.243 09:40:52 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:29.243 09:40:52 -- setup/common.sh@32 -- # continue 00:12:29.243 09:40:52 -- setup/common.sh@31 -- # IFS=': ' 00:12:29.243 09:40:52 -- setup/common.sh@31 -- # read -r var val _ 00:12:29.243 09:40:52 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:29.243 09:40:52 -- setup/common.sh@32 -- # continue 00:12:29.243 09:40:52 -- setup/common.sh@31 -- # IFS=': ' 00:12:29.243 09:40:52 -- setup/common.sh@31 -- # read -r var val _ 00:12:29.243 09:40:52 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:29.243 09:40:52 -- setup/common.sh@32 -- # continue 00:12:29.243 09:40:52 -- setup/common.sh@31 -- # IFS=': ' 00:12:29.243 09:40:52 -- setup/common.sh@31 -- # read -r var val _ 00:12:29.243 09:40:52 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:29.243 09:40:52 -- setup/common.sh@32 -- # continue 00:12:29.243 09:40:52 -- setup/common.sh@31 -- # IFS=': ' 00:12:29.243 09:40:52 -- setup/common.sh@31 -- # read -r var val _ 00:12:29.243 09:40:52 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:29.243 09:40:52 -- setup/common.sh@32 -- # continue 00:12:29.243 09:40:52 -- setup/common.sh@31 -- # IFS=': ' 00:12:29.243 09:40:52 -- setup/common.sh@31 -- # read -r var val _ 00:12:29.243 09:40:52 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:29.243 09:40:52 -- setup/common.sh@32 -- # continue 00:12:29.243 09:40:52 -- setup/common.sh@31 -- # IFS=': ' 00:12:29.243 09:40:52 -- setup/common.sh@31 -- # read -r var val _ 00:12:29.243 09:40:52 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:29.243 09:40:52 -- setup/common.sh@32 -- # continue 00:12:29.243 09:40:52 -- setup/common.sh@31 -- # IFS=': ' 00:12:29.243 09:40:52 -- setup/common.sh@31 -- # read -r var val _ 00:12:29.243 09:40:52 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:29.243 09:40:52 -- setup/common.sh@32 -- # continue 00:12:29.243 09:40:52 -- setup/common.sh@31 -- # IFS=': ' 00:12:29.243 09:40:52 -- setup/common.sh@31 -- # read -r var val _ 00:12:29.243 09:40:52 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:29.243 09:40:52 -- setup/common.sh@32 -- # continue 00:12:29.243 09:40:52 -- setup/common.sh@31 -- # IFS=': ' 00:12:29.243 09:40:52 -- setup/common.sh@31 -- # read -r var val _ 00:12:29.243 09:40:52 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:29.243 09:40:52 -- setup/common.sh@32 -- # continue 00:12:29.243 09:40:52 -- setup/common.sh@31 -- # IFS=': ' 00:12:29.243 09:40:52 -- setup/common.sh@31 -- # read -r var val _ 00:12:29.243 09:40:52 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:29.243 09:40:52 -- setup/common.sh@32 -- # continue 00:12:29.243 09:40:52 -- setup/common.sh@31 -- # IFS=': ' 00:12:29.243 09:40:52 -- setup/common.sh@31 -- # read -r var val 
_ 00:12:29.243 09:40:52 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:29.243 09:40:52 -- setup/common.sh@32 -- # continue 00:12:29.243 09:40:52 -- setup/common.sh@31 -- # IFS=': ' 00:12:29.243 09:40:52 -- setup/common.sh@31 -- # read -r var val _ 00:12:29.243 09:40:52 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:29.243 09:40:52 -- setup/common.sh@32 -- # continue 00:12:29.243 09:40:52 -- setup/common.sh@31 -- # IFS=': ' 00:12:29.243 09:40:52 -- setup/common.sh@31 -- # read -r var val _ 00:12:29.243 09:40:52 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:29.243 09:40:52 -- setup/common.sh@32 -- # continue 00:12:29.243 09:40:52 -- setup/common.sh@31 -- # IFS=': ' 00:12:29.243 09:40:52 -- setup/common.sh@31 -- # read -r var val _ 00:12:29.243 09:40:52 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:29.243 09:40:52 -- setup/common.sh@32 -- # continue 00:12:29.243 09:40:52 -- setup/common.sh@31 -- # IFS=': ' 00:12:29.243 09:40:52 -- setup/common.sh@31 -- # read -r var val _ 00:12:29.243 09:40:52 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:29.243 09:40:52 -- setup/common.sh@32 -- # continue 00:12:29.243 09:40:52 -- setup/common.sh@31 -- # IFS=': ' 00:12:29.243 09:40:52 -- setup/common.sh@31 -- # read -r var val _ 00:12:29.243 09:40:52 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:29.243 09:40:52 -- setup/common.sh@32 -- # continue 00:12:29.243 09:40:52 -- setup/common.sh@31 -- # IFS=': ' 00:12:29.243 09:40:52 -- setup/common.sh@31 -- # read -r var val _ 00:12:29.243 09:40:52 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:29.243 09:40:52 -- setup/common.sh@32 -- # continue 00:12:29.243 09:40:52 -- setup/common.sh@31 -- # IFS=': ' 00:12:29.243 09:40:52 -- setup/common.sh@31 -- # read -r var val _ 00:12:29.243 09:40:52 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:29.243 09:40:52 -- setup/common.sh@32 -- # continue 00:12:29.243 09:40:52 -- setup/common.sh@31 -- # IFS=': ' 00:12:29.243 09:40:52 -- setup/common.sh@31 -- # read -r var val _ 00:12:29.243 09:40:52 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:29.243 09:40:52 -- setup/common.sh@32 -- # continue 00:12:29.243 09:40:52 -- setup/common.sh@31 -- # IFS=': ' 00:12:29.243 09:40:52 -- setup/common.sh@31 -- # read -r var val _ 00:12:29.243 09:40:52 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:29.243 09:40:52 -- setup/common.sh@32 -- # continue 00:12:29.243 09:40:52 -- setup/common.sh@31 -- # IFS=': ' 00:12:29.243 09:40:52 -- setup/common.sh@31 -- # read -r var val _ 00:12:29.243 09:40:52 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:29.243 09:40:52 -- setup/common.sh@32 -- # continue 00:12:29.243 09:40:52 -- setup/common.sh@31 -- # IFS=': ' 00:12:29.243 09:40:52 -- setup/common.sh@31 -- # read -r var val _ 00:12:29.244 09:40:52 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:29.244 09:40:52 -- setup/common.sh@32 -- # continue 00:12:29.244 09:40:52 -- setup/common.sh@31 -- # IFS=': ' 00:12:29.244 09:40:52 -- setup/common.sh@31 -- # read -r var val _ 00:12:29.244 09:40:52 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:29.244 09:40:52 -- setup/common.sh@32 -- # continue 
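These scans feed a single consistency check: verify_nr_hugepages reads AnonHugePages (0), HugePages_Surp (0), HugePages_Rsvd (0) and HugePages_Total (1024) and requires the total to equal the 1024 pages default_setup requested plus surplus and reserved pages. In simplified form, assuming the get_meminfo sketch given earlier:

    nr_hugepages=1024                          # requested by default_setup (2097152 kB / 2048 kB per page)
    anon=$(get_meminfo AnonHugePages)          # 0
    surp=$(get_meminfo HugePages_Surp)         # 0
    resv=$(get_meminfo HugePages_Rsvd)         # 0
    total=$(get_meminfo HugePages_Total)       # 1024
    (( total == nr_hugepages + surp + resv ))  # 1024 == 1024 + 0 + 0
    (( total == nr_hugepages ))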
00:12:29.244 09:40:52 -- setup/common.sh@31 -- # IFS=': ' 00:12:29.244 09:40:52 -- setup/common.sh@31 -- # read -r var val _ 00:12:29.244 09:40:52 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:29.244 09:40:52 -- setup/common.sh@32 -- # continue 00:12:29.244 09:40:52 -- setup/common.sh@31 -- # IFS=': ' 00:12:29.244 09:40:52 -- setup/common.sh@31 -- # read -r var val _ 00:12:29.244 09:40:52 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:29.244 09:40:52 -- setup/common.sh@32 -- # continue 00:12:29.244 09:40:52 -- setup/common.sh@31 -- # IFS=': ' 00:12:29.244 09:40:52 -- setup/common.sh@31 -- # read -r var val _ 00:12:29.244 09:40:52 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:29.244 09:40:52 -- setup/common.sh@32 -- # continue 00:12:29.244 09:40:52 -- setup/common.sh@31 -- # IFS=': ' 00:12:29.244 09:40:52 -- setup/common.sh@31 -- # read -r var val _ 00:12:29.244 09:40:52 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:29.244 09:40:52 -- setup/common.sh@33 -- # echo 0 00:12:29.244 09:40:52 -- setup/common.sh@33 -- # return 0 00:12:29.244 09:40:52 -- setup/hugepages.sh@100 -- # resv=0 00:12:29.244 nr_hugepages=1024 00:12:29.244 09:40:52 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:12:29.244 resv_hugepages=0 00:12:29.244 09:40:52 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:12:29.244 surplus_hugepages=0 00:12:29.244 09:40:52 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:12:29.244 anon_hugepages=0 00:12:29.244 09:40:52 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:12:29.244 09:40:52 -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:12:29.244 09:40:52 -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:12:29.244 09:40:52 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:12:29.244 09:40:52 -- setup/common.sh@17 -- # local get=HugePages_Total 00:12:29.244 09:40:52 -- setup/common.sh@18 -- # local node= 00:12:29.244 09:40:52 -- setup/common.sh@19 -- # local var val 00:12:29.244 09:40:52 -- setup/common.sh@20 -- # local mem_f mem 00:12:29.244 09:40:52 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:12:29.244 09:40:52 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:12:29.244 09:40:52 -- setup/common.sh@25 -- # [[ -n '' ]] 00:12:29.244 09:40:52 -- setup/common.sh@28 -- # mapfile -t mem 00:12:29.244 09:40:52 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:12:29.244 09:40:52 -- setup/common.sh@31 -- # IFS=': ' 00:12:29.244 09:40:52 -- setup/common.sh@31 -- # read -r var val _ 00:12:29.244 09:40:52 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 7537684 kB' 'MemAvailable: 9499896 kB' 'Buffers: 2436 kB' 'Cached: 2174820 kB' 'SwapCached: 0 kB' 'Active: 849940 kB' 'Inactive: 1449560 kB' 'Active(anon): 132708 kB' 'Inactive(anon): 0 kB' 'Active(file): 717232 kB' 'Inactive(file): 1449560 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 820 kB' 'Writeback: 0 kB' 'AnonPages: 123884 kB' 'Mapped: 48712 kB' 'Shmem: 10464 kB' 'KReclaimable: 64716 kB' 'Slab: 137684 kB' 'SReclaimable: 64716 kB' 'SUnreclaim: 72968 kB' 'KernelStack: 6368 kB' 'PageTables: 4452 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461012 kB' 'Committed_AS: 357116 kB' 'VmallocTotal: 34359738367 kB' 
'VmallocUsed: 54692 kB' 'VmallocChunk: 0 kB' 'Percpu: 6240 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 173932 kB' 'DirectMap2M: 6117376 kB' 'DirectMap1G: 8388608 kB' 00:12:29.244 09:40:52 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:29.244 09:40:52 -- setup/common.sh@32 -- # continue 00:12:29.244 09:40:52 -- setup/common.sh@31 -- # IFS=': ' 00:12:29.244 09:40:52 -- setup/common.sh@31 -- # read -r var val _ 00:12:29.244 09:40:52 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:29.244 09:40:52 -- setup/common.sh@32 -- # continue 00:12:29.244 09:40:52 -- setup/common.sh@31 -- # IFS=': ' 00:12:29.244 09:40:52 -- setup/common.sh@31 -- # read -r var val _ 00:12:29.244 09:40:52 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:29.244 09:40:52 -- setup/common.sh@32 -- # continue 00:12:29.244 09:40:52 -- setup/common.sh@31 -- # IFS=': ' 00:12:29.244 09:40:52 -- setup/common.sh@31 -- # read -r var val _ 00:12:29.244 09:40:52 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:29.244 09:40:52 -- setup/common.sh@32 -- # continue 00:12:29.244 09:40:52 -- setup/common.sh@31 -- # IFS=': ' 00:12:29.244 09:40:52 -- setup/common.sh@31 -- # read -r var val _ 00:12:29.244 09:40:52 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:29.244 09:40:52 -- setup/common.sh@32 -- # continue 00:12:29.244 09:40:52 -- setup/common.sh@31 -- # IFS=': ' 00:12:29.244 09:40:52 -- setup/common.sh@31 -- # read -r var val _ 00:12:29.244 09:40:52 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:29.244 09:40:52 -- setup/common.sh@32 -- # continue 00:12:29.244 09:40:52 -- setup/common.sh@31 -- # IFS=': ' 00:12:29.244 09:40:52 -- setup/common.sh@31 -- # read -r var val _ 00:12:29.244 09:40:52 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:29.244 09:40:52 -- setup/common.sh@32 -- # continue 00:12:29.244 09:40:52 -- setup/common.sh@31 -- # IFS=': ' 00:12:29.244 09:40:52 -- setup/common.sh@31 -- # read -r var val _ 00:12:29.244 09:40:52 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:29.244 09:40:52 -- setup/common.sh@32 -- # continue 00:12:29.244 09:40:52 -- setup/common.sh@31 -- # IFS=': ' 00:12:29.244 09:40:52 -- setup/common.sh@31 -- # read -r var val _ 00:12:29.244 09:40:52 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:29.244 09:40:52 -- setup/common.sh@32 -- # continue 00:12:29.244 09:40:52 -- setup/common.sh@31 -- # IFS=': ' 00:12:29.244 09:40:52 -- setup/common.sh@31 -- # read -r var val _ 00:12:29.244 09:40:52 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:29.244 09:40:52 -- setup/common.sh@32 -- # continue 00:12:29.244 09:40:52 -- setup/common.sh@31 -- # IFS=': ' 00:12:29.244 09:40:52 -- setup/common.sh@31 -- # read -r var val _ 00:12:29.244 09:40:52 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:29.244 09:40:52 -- setup/common.sh@32 -- # continue 00:12:29.244 09:40:52 -- setup/common.sh@31 -- # IFS=': ' 00:12:29.244 09:40:52 -- setup/common.sh@31 -- # read 
-r var val _ 00:12:29.244 09:40:52 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:29.244 09:40:52 -- setup/common.sh@32 -- # continue 00:12:29.244 09:40:52 -- setup/common.sh@31 -- # IFS=': ' 00:12:29.244 09:40:52 -- setup/common.sh@31 -- # read -r var val _ 00:12:29.244 09:40:52 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:29.244 09:40:52 -- setup/common.sh@32 -- # continue 00:12:29.244 09:40:52 -- setup/common.sh@31 -- # IFS=': ' 00:12:29.244 09:40:52 -- setup/common.sh@31 -- # read -r var val _ 00:12:29.245 09:40:52 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:29.245 09:40:52 -- setup/common.sh@32 -- # continue 00:12:29.245 09:40:52 -- setup/common.sh@31 -- # IFS=': ' 00:12:29.245 09:40:52 -- setup/common.sh@31 -- # read -r var val _ 00:12:29.245 09:40:52 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:29.245 09:40:52 -- setup/common.sh@32 -- # continue 00:12:29.245 09:40:52 -- setup/common.sh@31 -- # IFS=': ' 00:12:29.245 09:40:52 -- setup/common.sh@31 -- # read -r var val _ 00:12:29.245 09:40:52 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:29.245 09:40:52 -- setup/common.sh@32 -- # continue 00:12:29.245 09:40:52 -- setup/common.sh@31 -- # IFS=': ' 00:12:29.245 09:40:52 -- setup/common.sh@31 -- # read -r var val _ 00:12:29.245 09:40:52 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:29.245 09:40:52 -- setup/common.sh@32 -- # continue 00:12:29.245 09:40:52 -- setup/common.sh@31 -- # IFS=': ' 00:12:29.245 09:40:52 -- setup/common.sh@31 -- # read -r var val _ 00:12:29.245 09:40:52 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:29.245 09:40:52 -- setup/common.sh@32 -- # continue 00:12:29.245 09:40:52 -- setup/common.sh@31 -- # IFS=': ' 00:12:29.245 09:40:52 -- setup/common.sh@31 -- # read -r var val _ 00:12:29.245 09:40:52 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:29.245 09:40:52 -- setup/common.sh@32 -- # continue 00:12:29.245 09:40:52 -- setup/common.sh@31 -- # IFS=': ' 00:12:29.245 09:40:52 -- setup/common.sh@31 -- # read -r var val _ 00:12:29.245 09:40:52 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:29.245 09:40:52 -- setup/common.sh@32 -- # continue 00:12:29.245 09:40:52 -- setup/common.sh@31 -- # IFS=': ' 00:12:29.245 09:40:52 -- setup/common.sh@31 -- # read -r var val _ 00:12:29.245 09:40:52 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:29.245 09:40:52 -- setup/common.sh@32 -- # continue 00:12:29.245 09:40:52 -- setup/common.sh@31 -- # IFS=': ' 00:12:29.245 09:40:52 -- setup/common.sh@31 -- # read -r var val _ 00:12:29.245 09:40:52 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:29.245 09:40:52 -- setup/common.sh@32 -- # continue 00:12:29.245 09:40:52 -- setup/common.sh@31 -- # IFS=': ' 00:12:29.245 09:40:52 -- setup/common.sh@31 -- # read -r var val _ 00:12:29.245 09:40:52 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:29.245 09:40:52 -- setup/common.sh@32 -- # continue 00:12:29.245 09:40:52 -- setup/common.sh@31 -- # IFS=': ' 00:12:29.245 09:40:52 -- setup/common.sh@31 -- # read -r var val _ 00:12:29.245 09:40:52 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:29.245 09:40:52 -- setup/common.sh@32 -- # continue 
00:12:29.245 09:40:52 -- setup/common.sh@31 -- # IFS=': ' 00:12:29.245 09:40:52 -- setup/common.sh@31 -- # read -r var val _ 00:12:29.245 09:40:52 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:29.245 09:40:52 -- setup/common.sh@32 -- # continue 00:12:29.245 09:40:52 -- setup/common.sh@31 -- # IFS=': ' 00:12:29.245 09:40:52 -- setup/common.sh@31 -- # read -r var val _ 00:12:29.245 09:40:52 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:29.245 09:40:52 -- setup/common.sh@32 -- # continue 00:12:29.245 09:40:52 -- setup/common.sh@31 -- # IFS=': ' 00:12:29.245 09:40:52 -- setup/common.sh@31 -- # read -r var val _ 00:12:29.245 09:40:52 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:29.245 09:40:52 -- setup/common.sh@32 -- # continue 00:12:29.245 09:40:52 -- setup/common.sh@31 -- # IFS=': ' 00:12:29.245 09:40:52 -- setup/common.sh@31 -- # read -r var val _ 00:12:29.245 09:40:52 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:29.245 09:40:52 -- setup/common.sh@32 -- # continue 00:12:29.245 09:40:52 -- setup/common.sh@31 -- # IFS=': ' 00:12:29.245 09:40:52 -- setup/common.sh@31 -- # read -r var val _ 00:12:29.245 09:40:52 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:29.245 09:40:52 -- setup/common.sh@32 -- # continue 00:12:29.245 09:40:52 -- setup/common.sh@31 -- # IFS=': ' 00:12:29.245 09:40:52 -- setup/common.sh@31 -- # read -r var val _ 00:12:29.245 09:40:52 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:29.245 09:40:52 -- setup/common.sh@32 -- # continue 00:12:29.245 09:40:52 -- setup/common.sh@31 -- # IFS=': ' 00:12:29.245 09:40:52 -- setup/common.sh@31 -- # read -r var val _ 00:12:29.245 09:40:52 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:29.245 09:40:52 -- setup/common.sh@32 -- # continue 00:12:29.245 09:40:52 -- setup/common.sh@31 -- # IFS=': ' 00:12:29.245 09:40:52 -- setup/common.sh@31 -- # read -r var val _ 00:12:29.245 09:40:52 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:29.245 09:40:52 -- setup/common.sh@32 -- # continue 00:12:29.245 09:40:52 -- setup/common.sh@31 -- # IFS=': ' 00:12:29.245 09:40:52 -- setup/common.sh@31 -- # read -r var val _ 00:12:29.245 09:40:52 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:29.245 09:40:52 -- setup/common.sh@32 -- # continue 00:12:29.245 09:40:52 -- setup/common.sh@31 -- # IFS=': ' 00:12:29.245 09:40:52 -- setup/common.sh@31 -- # read -r var val _ 00:12:29.245 09:40:52 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:29.245 09:40:52 -- setup/common.sh@32 -- # continue 00:12:29.245 09:40:52 -- setup/common.sh@31 -- # IFS=': ' 00:12:29.245 09:40:52 -- setup/common.sh@31 -- # read -r var val _ 00:12:29.245 09:40:52 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:29.245 09:40:52 -- setup/common.sh@32 -- # continue 00:12:29.245 09:40:52 -- setup/common.sh@31 -- # IFS=': ' 00:12:29.245 09:40:52 -- setup/common.sh@31 -- # read -r var val _ 00:12:29.245 09:40:52 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:29.245 09:40:52 -- setup/common.sh@32 -- # continue 00:12:29.245 09:40:52 -- setup/common.sh@31 -- # IFS=': ' 00:12:29.245 09:40:52 -- setup/common.sh@31 -- # read -r var val _ 00:12:29.245 09:40:52 -- 
setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:29.245 09:40:52 -- setup/common.sh@32 -- # continue 00:12:29.245 09:40:52 -- setup/common.sh@31 -- # IFS=': ' 00:12:29.245 09:40:52 -- setup/common.sh@31 -- # read -r var val _ 00:12:29.245 09:40:52 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:29.245 09:40:52 -- setup/common.sh@32 -- # continue 00:12:29.245 09:40:52 -- setup/common.sh@31 -- # IFS=': ' 00:12:29.245 09:40:52 -- setup/common.sh@31 -- # read -r var val _ 00:12:29.245 09:40:52 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:29.245 09:40:52 -- setup/common.sh@32 -- # continue 00:12:29.245 09:40:52 -- setup/common.sh@31 -- # IFS=': ' 00:12:29.245 09:40:52 -- setup/common.sh@31 -- # read -r var val _ 00:12:29.245 09:40:52 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:29.245 09:40:52 -- setup/common.sh@32 -- # continue 00:12:29.245 09:40:52 -- setup/common.sh@31 -- # IFS=': ' 00:12:29.245 09:40:52 -- setup/common.sh@31 -- # read -r var val _ 00:12:29.245 09:40:52 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:29.245 09:40:52 -- setup/common.sh@32 -- # continue 00:12:29.245 09:40:52 -- setup/common.sh@31 -- # IFS=': ' 00:12:29.245 09:40:52 -- setup/common.sh@31 -- # read -r var val _ 00:12:29.245 09:40:52 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:29.245 09:40:52 -- setup/common.sh@32 -- # continue 00:12:29.245 09:40:52 -- setup/common.sh@31 -- # IFS=': ' 00:12:29.245 09:40:52 -- setup/common.sh@31 -- # read -r var val _ 00:12:29.245 09:40:52 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:29.245 09:40:52 -- setup/common.sh@32 -- # continue 00:12:29.245 09:40:52 -- setup/common.sh@31 -- # IFS=': ' 00:12:29.245 09:40:52 -- setup/common.sh@31 -- # read -r var val _ 00:12:29.245 09:40:52 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:29.245 09:40:52 -- setup/common.sh@32 -- # continue 00:12:29.245 09:40:52 -- setup/common.sh@31 -- # IFS=': ' 00:12:29.245 09:40:52 -- setup/common.sh@31 -- # read -r var val _ 00:12:29.245 09:40:52 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:29.245 09:40:52 -- setup/common.sh@32 -- # continue 00:12:29.245 09:40:52 -- setup/common.sh@31 -- # IFS=': ' 00:12:29.245 09:40:52 -- setup/common.sh@31 -- # read -r var val _ 00:12:29.245 09:40:52 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:29.245 09:40:52 -- setup/common.sh@32 -- # continue 00:12:29.245 09:40:52 -- setup/common.sh@31 -- # IFS=': ' 00:12:29.245 09:40:52 -- setup/common.sh@31 -- # read -r var val _ 00:12:29.245 09:40:52 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:29.245 09:40:52 -- setup/common.sh@32 -- # continue 00:12:29.245 09:40:52 -- setup/common.sh@31 -- # IFS=': ' 00:12:29.245 09:40:52 -- setup/common.sh@31 -- # read -r var val _ 00:12:29.245 09:40:52 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:29.245 09:40:52 -- setup/common.sh@32 -- # continue 00:12:29.245 09:40:52 -- setup/common.sh@31 -- # IFS=': ' 00:12:29.245 09:40:52 -- setup/common.sh@31 -- # read -r var val _ 00:12:29.245 09:40:52 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:29.245 09:40:52 -- setup/common.sh@33 -- # echo 1024 
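[Editor's note] The scan that just finished is setup/common.sh's get_meminfo helper: it reads /proc/meminfo (or a per-node meminfo file), splits each line on ': ', skips every field until it reaches the requested key, and echoes that key's value — here HugePages_Total, which comes back as 1024 and feeds the hugepages.sh@110 check that 1024 == nr_hugepages + surp + resv. The following is a minimal stand-alone sketch of the same parsing idea, not the actual SPDK helper; the function name and structure are illustrative only.

#!/usr/bin/env bash
# Hypothetical sketch (not setup/common.sh itself): pull one field out of
# /proc/meminfo, or out of a per-node meminfo file when a node id is given.
get_meminfo_sketch() {
    local key=$1 node=${2:-}
    local file=/proc/meminfo
    [[ -n $node ]] && file=/sys/devices/system/node/node${node}/meminfo
    local line var val _
    while IFS= read -r line; do
        # per-node files prefix every line with "Node <n> "; strip that first
        [[ $line =~ ^Node\ [0-9]+\ (.*)$ ]] && line=${BASH_REMATCH[1]}
        # split "Key:   value [kB]" on ': ' and compare the key
        IFS=': ' read -r var val _ <<<"$line"
        [[ $var == "$key" ]] && { echo "$val"; return 0; }
    done < "$file"
    return 1
}
# example queries mirroring what the log does above:
#   get_meminfo_sketch HugePages_Total      ->  1024 on this runner
#   get_meminfo_sketch HugePages_Surp 0     ->  per-node surplus for node0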
00:12:29.245 09:40:52 -- setup/common.sh@33 -- # return 0 00:12:29.245 09:40:52 -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:12:29.245 09:40:52 -- setup/hugepages.sh@112 -- # get_nodes 00:12:29.245 09:40:52 -- setup/hugepages.sh@27 -- # local node 00:12:29.245 09:40:52 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:12:29.245 09:40:52 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:12:29.246 09:40:52 -- setup/hugepages.sh@32 -- # no_nodes=1 00:12:29.246 09:40:52 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:12:29.246 09:40:52 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:12:29.246 09:40:52 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:12:29.246 09:40:52 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:12:29.246 09:40:52 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:12:29.246 09:40:52 -- setup/common.sh@18 -- # local node=0 00:12:29.246 09:40:52 -- setup/common.sh@19 -- # local var val 00:12:29.246 09:40:52 -- setup/common.sh@20 -- # local mem_f mem 00:12:29.246 09:40:52 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:12:29.246 09:40:52 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:12:29.246 09:40:52 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:12:29.246 09:40:52 -- setup/common.sh@28 -- # mapfile -t mem 00:12:29.246 09:40:52 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:12:29.246 09:40:52 -- setup/common.sh@31 -- # IFS=': ' 00:12:29.246 09:40:52 -- setup/common.sh@31 -- # read -r var val _ 00:12:29.246 09:40:52 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 7537684 kB' 'MemUsed: 4704288 kB' 'SwapCached: 0 kB' 'Active: 850204 kB' 'Inactive: 1449560 kB' 'Active(anon): 132972 kB' 'Inactive(anon): 0 kB' 'Active(file): 717232 kB' 'Inactive(file): 1449560 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 820 kB' 'Writeback: 0 kB' 'FilePages: 2177256 kB' 'Mapped: 48712 kB' 'AnonPages: 124144 kB' 'Shmem: 10464 kB' 'KernelStack: 6368 kB' 'PageTables: 4452 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 64716 kB' 'Slab: 137684 kB' 'SReclaimable: 64716 kB' 'SUnreclaim: 72968 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:12:29.246 09:40:52 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:29.246 09:40:52 -- setup/common.sh@32 -- # continue 00:12:29.246 09:40:52 -- setup/common.sh@31 -- # IFS=': ' 00:12:29.246 09:40:52 -- setup/common.sh@31 -- # read -r var val _ 00:12:29.246 09:40:52 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:29.246 09:40:52 -- setup/common.sh@32 -- # continue 00:12:29.246 09:40:52 -- setup/common.sh@31 -- # IFS=': ' 00:12:29.246 09:40:52 -- setup/common.sh@31 -- # read -r var val _ 00:12:29.246 09:40:52 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:29.246 09:40:52 -- setup/common.sh@32 -- # continue 00:12:29.246 09:40:52 -- setup/common.sh@31 -- # IFS=': ' 00:12:29.246 09:40:52 -- setup/common.sh@31 -- # read -r var val _ 00:12:29.246 09:40:52 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:29.246 09:40:52 -- setup/common.sh@32 -- # continue 00:12:29.246 09:40:52 -- setup/common.sh@31 -- # 
IFS=': ' 00:12:29.246 09:40:52 -- setup/common.sh@31 -- # read -r var val _ 00:12:29.246 09:40:52 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:29.246 09:40:52 -- setup/common.sh@32 -- # continue 00:12:29.246 09:40:52 -- setup/common.sh@31 -- # IFS=': ' 00:12:29.246 09:40:52 -- setup/common.sh@31 -- # read -r var val _ 00:12:29.246 09:40:52 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:29.246 09:40:52 -- setup/common.sh@32 -- # continue 00:12:29.246 09:40:52 -- setup/common.sh@31 -- # IFS=': ' 00:12:29.246 09:40:52 -- setup/common.sh@31 -- # read -r var val _ 00:12:29.246 09:40:52 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:29.246 09:40:52 -- setup/common.sh@32 -- # continue 00:12:29.246 09:40:52 -- setup/common.sh@31 -- # IFS=': ' 00:12:29.246 09:40:52 -- setup/common.sh@31 -- # read -r var val _ 00:12:29.246 09:40:52 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:29.246 09:40:52 -- setup/common.sh@32 -- # continue 00:12:29.246 09:40:52 -- setup/common.sh@31 -- # IFS=': ' 00:12:29.246 09:40:52 -- setup/common.sh@31 -- # read -r var val _ 00:12:29.246 09:40:52 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:29.246 09:40:52 -- setup/common.sh@32 -- # continue 00:12:29.246 09:40:52 -- setup/common.sh@31 -- # IFS=': ' 00:12:29.246 09:40:52 -- setup/common.sh@31 -- # read -r var val _ 00:12:29.246 09:40:52 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:29.246 09:40:52 -- setup/common.sh@32 -- # continue 00:12:29.246 09:40:52 -- setup/common.sh@31 -- # IFS=': ' 00:12:29.246 09:40:52 -- setup/common.sh@31 -- # read -r var val _ 00:12:29.246 09:40:52 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:29.246 09:40:52 -- setup/common.sh@32 -- # continue 00:12:29.246 09:40:52 -- setup/common.sh@31 -- # IFS=': ' 00:12:29.246 09:40:52 -- setup/common.sh@31 -- # read -r var val _ 00:12:29.246 09:40:52 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:29.246 09:40:52 -- setup/common.sh@32 -- # continue 00:12:29.246 09:40:52 -- setup/common.sh@31 -- # IFS=': ' 00:12:29.246 09:40:52 -- setup/common.sh@31 -- # read -r var val _ 00:12:29.246 09:40:52 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:29.246 09:40:52 -- setup/common.sh@32 -- # continue 00:12:29.246 09:40:52 -- setup/common.sh@31 -- # IFS=': ' 00:12:29.246 09:40:52 -- setup/common.sh@31 -- # read -r var val _ 00:12:29.246 09:40:52 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:29.246 09:40:52 -- setup/common.sh@32 -- # continue 00:12:29.246 09:40:52 -- setup/common.sh@31 -- # IFS=': ' 00:12:29.246 09:40:52 -- setup/common.sh@31 -- # read -r var val _ 00:12:29.246 09:40:52 -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:29.246 09:40:52 -- setup/common.sh@32 -- # continue 00:12:29.246 09:40:52 -- setup/common.sh@31 -- # IFS=': ' 00:12:29.246 09:40:52 -- setup/common.sh@31 -- # read -r var val _ 00:12:29.246 09:40:52 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:29.246 09:40:52 -- setup/common.sh@32 -- # continue 00:12:29.246 09:40:52 -- setup/common.sh@31 -- # IFS=': ' 00:12:29.246 09:40:52 -- setup/common.sh@31 -- # read -r var val _ 00:12:29.246 09:40:52 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:29.246 
09:40:52 -- setup/common.sh@32 -- # continue 00:12:29.246 09:40:52 -- setup/common.sh@31 -- # IFS=': ' 00:12:29.246 09:40:52 -- setup/common.sh@31 -- # read -r var val _ 00:12:29.246 09:40:52 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:29.246 09:40:52 -- setup/common.sh@32 -- # continue 00:12:29.246 09:40:52 -- setup/common.sh@31 -- # IFS=': ' 00:12:29.246 09:40:52 -- setup/common.sh@31 -- # read -r var val _ 00:12:29.246 09:40:52 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:29.246 09:40:52 -- setup/common.sh@32 -- # continue 00:12:29.246 09:40:52 -- setup/common.sh@31 -- # IFS=': ' 00:12:29.246 09:40:52 -- setup/common.sh@31 -- # read -r var val _ 00:12:29.246 09:40:52 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:29.246 09:40:52 -- setup/common.sh@32 -- # continue 00:12:29.246 09:40:52 -- setup/common.sh@31 -- # IFS=': ' 00:12:29.246 09:40:52 -- setup/common.sh@31 -- # read -r var val _ 00:12:29.246 09:40:52 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:29.246 09:40:52 -- setup/common.sh@32 -- # continue 00:12:29.246 09:40:52 -- setup/common.sh@31 -- # IFS=': ' 00:12:29.246 09:40:52 -- setup/common.sh@31 -- # read -r var val _ 00:12:29.246 09:40:52 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:29.246 09:40:52 -- setup/common.sh@32 -- # continue 00:12:29.246 09:40:52 -- setup/common.sh@31 -- # IFS=': ' 00:12:29.246 09:40:52 -- setup/common.sh@31 -- # read -r var val _ 00:12:29.246 09:40:52 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:29.246 09:40:52 -- setup/common.sh@32 -- # continue 00:12:29.246 09:40:52 -- setup/common.sh@31 -- # IFS=': ' 00:12:29.246 09:40:52 -- setup/common.sh@31 -- # read -r var val _ 00:12:29.246 09:40:52 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:29.246 09:40:52 -- setup/common.sh@32 -- # continue 00:12:29.246 09:40:52 -- setup/common.sh@31 -- # IFS=': ' 00:12:29.246 09:40:52 -- setup/common.sh@31 -- # read -r var val _ 00:12:29.246 09:40:52 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:29.246 09:40:52 -- setup/common.sh@32 -- # continue 00:12:29.246 09:40:52 -- setup/common.sh@31 -- # IFS=': ' 00:12:29.246 09:40:52 -- setup/common.sh@31 -- # read -r var val _ 00:12:29.246 09:40:52 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:29.246 09:40:52 -- setup/common.sh@32 -- # continue 00:12:29.246 09:40:52 -- setup/common.sh@31 -- # IFS=': ' 00:12:29.246 09:40:52 -- setup/common.sh@31 -- # read -r var val _ 00:12:29.246 09:40:52 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:29.246 09:40:52 -- setup/common.sh@32 -- # continue 00:12:29.246 09:40:52 -- setup/common.sh@31 -- # IFS=': ' 00:12:29.246 09:40:52 -- setup/common.sh@31 -- # read -r var val _ 00:12:29.246 09:40:52 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:29.246 09:40:52 -- setup/common.sh@32 -- # continue 00:12:29.246 09:40:52 -- setup/common.sh@31 -- # IFS=': ' 00:12:29.246 09:40:52 -- setup/common.sh@31 -- # read -r var val _ 00:12:29.246 09:40:52 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:29.246 09:40:52 -- setup/common.sh@32 -- # continue 00:12:29.246 09:40:52 -- setup/common.sh@31 -- # IFS=': ' 00:12:29.246 09:40:52 -- setup/common.sh@31 -- # read -r var val _ 00:12:29.246 
09:40:52 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:29.246 09:40:52 -- setup/common.sh@32 -- # continue 00:12:29.246 09:40:52 -- setup/common.sh@31 -- # IFS=': ' 00:12:29.246 09:40:52 -- setup/common.sh@31 -- # read -r var val _ 00:12:29.246 09:40:52 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:29.246 09:40:52 -- setup/common.sh@32 -- # continue 00:12:29.246 09:40:52 -- setup/common.sh@31 -- # IFS=': ' 00:12:29.246 09:40:52 -- setup/common.sh@31 -- # read -r var val _ 00:12:29.246 09:40:52 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:29.246 09:40:52 -- setup/common.sh@32 -- # continue 00:12:29.247 09:40:52 -- setup/common.sh@31 -- # IFS=': ' 00:12:29.247 09:40:52 -- setup/common.sh@31 -- # read -r var val _ 00:12:29.247 09:40:52 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:29.247 09:40:52 -- setup/common.sh@32 -- # continue 00:12:29.247 09:40:52 -- setup/common.sh@31 -- # IFS=': ' 00:12:29.247 09:40:52 -- setup/common.sh@31 -- # read -r var val _ 00:12:29.247 09:40:52 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:29.247 09:40:52 -- setup/common.sh@32 -- # continue 00:12:29.247 09:40:52 -- setup/common.sh@31 -- # IFS=': ' 00:12:29.247 09:40:52 -- setup/common.sh@31 -- # read -r var val _ 00:12:29.247 09:40:52 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:29.247 09:40:52 -- setup/common.sh@32 -- # continue 00:12:29.247 09:40:52 -- setup/common.sh@31 -- # IFS=': ' 00:12:29.247 09:40:52 -- setup/common.sh@31 -- # read -r var val _ 00:12:29.247 09:40:52 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:29.247 09:40:52 -- setup/common.sh@32 -- # continue 00:12:29.247 09:40:52 -- setup/common.sh@31 -- # IFS=': ' 00:12:29.247 09:40:52 -- setup/common.sh@31 -- # read -r var val _ 00:12:29.247 09:40:52 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:29.506 09:40:52 -- setup/common.sh@33 -- # echo 0 00:12:29.506 09:40:52 -- setup/common.sh@33 -- # return 0 00:12:29.506 09:40:52 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:12:29.506 09:40:52 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:12:29.506 09:40:52 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:12:29.506 09:40:52 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:12:29.506 node0=1024 expecting 1024 00:12:29.506 09:40:52 -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:12:29.506 09:40:52 -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:12:29.506 00:12:29.506 real 0m1.026s 00:12:29.506 user 0m0.447s 00:12:29.506 sys 0m0.523s 00:12:29.506 09:40:52 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:12:29.506 09:40:52 -- common/autotest_common.sh@10 -- # set +x 00:12:29.506 ************************************ 00:12:29.506 END TEST default_setup 00:12:29.506 ************************************ 00:12:29.506 09:40:52 -- setup/hugepages.sh@211 -- # run_test per_node_1G_alloc per_node_1G_alloc 00:12:29.506 09:40:52 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:12:29.506 09:40:52 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:12:29.506 09:40:52 -- common/autotest_common.sh@10 -- # set +x 00:12:29.506 ************************************ 00:12:29.506 START TEST per_node_1G_alloc 00:12:29.506 ************************************ 
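[Editor's note] default_setup ends with node0 holding the expected 1024 hugepages, and the log moves on to per_node_1G_alloc, which (as the lines below show) requests 1 GiB of hugepages pinned to a single node: size 1048576 kB with a 2048 kB hugepage size gives nr_hugepages=512, passed to scripts/setup.sh as NRHUGE=512 HUGENODE=0. The snippet below is a hedged sketch of the generic kernel mechanism for such a per-node reservation — the standard per-node sysfs knob — and is not a claim about how setup.sh implements it internally; node and page counts are taken from this run and are otherwise illustrative. It must run as root.

# Hypothetical sketch: reserve 512 x 2 MiB hugepages on NUMA node 0 via the
# kernel's per-node sysfs interface (the effect NRHUGE=512 HUGENODE=0 asks for).
node=0
pages=512
echo "$pages" > /sys/devices/system/node/node${node}/hugepages/hugepages-2048kB/nr_hugepages
# confirm the reservation landed on that node
cat /sys/devices/system/node/node${node}/hugepages/hugepages-2048kB/nr_hugepages

After the reservation, verify_nr_hugepages re-reads HugePages_Total and the per-node meminfo exactly as in the scan above, which is what the subsequent xtrace lines show.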
00:12:29.506 09:40:52 -- common/autotest_common.sh@1111 -- # per_node_1G_alloc 00:12:29.506 09:40:52 -- setup/hugepages.sh@143 -- # local IFS=, 00:12:29.506 09:40:52 -- setup/hugepages.sh@145 -- # get_test_nr_hugepages 1048576 0 00:12:29.506 09:40:52 -- setup/hugepages.sh@49 -- # local size=1048576 00:12:29.506 09:40:52 -- setup/hugepages.sh@50 -- # (( 2 > 1 )) 00:12:29.506 09:40:52 -- setup/hugepages.sh@51 -- # shift 00:12:29.506 09:40:52 -- setup/hugepages.sh@52 -- # node_ids=('0') 00:12:29.506 09:40:52 -- setup/hugepages.sh@52 -- # local node_ids 00:12:29.506 09:40:52 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:12:29.506 09:40:52 -- setup/hugepages.sh@57 -- # nr_hugepages=512 00:12:29.506 09:40:52 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 00:12:29.506 09:40:52 -- setup/hugepages.sh@62 -- # user_nodes=('0') 00:12:29.506 09:40:52 -- setup/hugepages.sh@62 -- # local user_nodes 00:12:29.506 09:40:52 -- setup/hugepages.sh@64 -- # local _nr_hugepages=512 00:12:29.506 09:40:52 -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:12:29.506 09:40:52 -- setup/hugepages.sh@67 -- # nodes_test=() 00:12:29.506 09:40:52 -- setup/hugepages.sh@67 -- # local -g nodes_test 00:12:29.506 09:40:52 -- setup/hugepages.sh@69 -- # (( 1 > 0 )) 00:12:29.506 09:40:52 -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:12:29.506 09:40:52 -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=512 00:12:29.506 09:40:52 -- setup/hugepages.sh@73 -- # return 0 00:12:29.506 09:40:52 -- setup/hugepages.sh@146 -- # NRHUGE=512 00:12:29.506 09:40:52 -- setup/hugepages.sh@146 -- # HUGENODE=0 00:12:29.506 09:40:52 -- setup/hugepages.sh@146 -- # setup output 00:12:29.506 09:40:52 -- setup/common.sh@9 -- # [[ output == output ]] 00:12:29.506 09:40:52 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:12:29.764 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:12:29.764 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:12:29.764 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:12:30.027 09:40:53 -- setup/hugepages.sh@147 -- # nr_hugepages=512 00:12:30.027 09:40:53 -- setup/hugepages.sh@147 -- # verify_nr_hugepages 00:12:30.027 09:40:53 -- setup/hugepages.sh@89 -- # local node 00:12:30.027 09:40:53 -- setup/hugepages.sh@90 -- # local sorted_t 00:12:30.027 09:40:53 -- setup/hugepages.sh@91 -- # local sorted_s 00:12:30.027 09:40:53 -- setup/hugepages.sh@92 -- # local surp 00:12:30.027 09:40:53 -- setup/hugepages.sh@93 -- # local resv 00:12:30.027 09:40:53 -- setup/hugepages.sh@94 -- # local anon 00:12:30.027 09:40:53 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:12:30.027 09:40:53 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:12:30.027 09:40:53 -- setup/common.sh@17 -- # local get=AnonHugePages 00:12:30.027 09:40:53 -- setup/common.sh@18 -- # local node= 00:12:30.027 09:40:53 -- setup/common.sh@19 -- # local var val 00:12:30.027 09:40:53 -- setup/common.sh@20 -- # local mem_f mem 00:12:30.027 09:40:53 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:12:30.027 09:40:53 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:12:30.027 09:40:53 -- setup/common.sh@25 -- # [[ -n '' ]] 00:12:30.027 09:40:53 -- setup/common.sh@28 -- # mapfile -t mem 00:12:30.027 09:40:53 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:12:30.027 09:40:53 -- setup/common.sh@31 -- # IFS=': ' 
00:12:30.027 09:40:53 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 8586516 kB' 'MemAvailable: 10548724 kB' 'Buffers: 2436 kB' 'Cached: 2174820 kB' 'SwapCached: 0 kB' 'Active: 850588 kB' 'Inactive: 1449560 kB' 'Active(anon): 133356 kB' 'Inactive(anon): 0 kB' 'Active(file): 717232 kB' 'Inactive(file): 1449560 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 980 kB' 'Writeback: 0 kB' 'AnonPages: 124516 kB' 'Mapped: 48820 kB' 'Shmem: 10464 kB' 'KReclaimable: 64708 kB' 'Slab: 137672 kB' 'SReclaimable: 64708 kB' 'SUnreclaim: 72964 kB' 'KernelStack: 6340 kB' 'PageTables: 4428 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13985300 kB' 'Committed_AS: 356948 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54740 kB' 'VmallocChunk: 0 kB' 'Percpu: 6240 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 173932 kB' 'DirectMap2M: 6117376 kB' 'DirectMap1G: 8388608 kB' 00:12:30.027 09:40:53 -- setup/common.sh@31 -- # read -r var val _ 00:12:30.027 09:40:53 -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:30.027 09:40:53 -- setup/common.sh@32 -- # continue 00:12:30.027 09:40:53 -- setup/common.sh@31 -- # IFS=': ' 00:12:30.027 09:40:53 -- setup/common.sh@31 -- # read -r var val _ 00:12:30.027 09:40:53 -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:30.027 09:40:53 -- setup/common.sh@32 -- # continue 00:12:30.027 09:40:53 -- setup/common.sh@31 -- # IFS=': ' 00:12:30.027 09:40:53 -- setup/common.sh@31 -- # read -r var val _ 00:12:30.027 09:40:53 -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:30.027 09:40:53 -- setup/common.sh@32 -- # continue 00:12:30.027 09:40:53 -- setup/common.sh@31 -- # IFS=': ' 00:12:30.027 09:40:53 -- setup/common.sh@31 -- # read -r var val _ 00:12:30.027 09:40:53 -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:30.027 09:40:53 -- setup/common.sh@32 -- # continue 00:12:30.027 09:40:53 -- setup/common.sh@31 -- # IFS=': ' 00:12:30.027 09:40:53 -- setup/common.sh@31 -- # read -r var val _ 00:12:30.027 09:40:53 -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:30.027 09:40:53 -- setup/common.sh@32 -- # continue 00:12:30.027 09:40:53 -- setup/common.sh@31 -- # IFS=': ' 00:12:30.027 09:40:53 -- setup/common.sh@31 -- # read -r var val _ 00:12:30.027 09:40:53 -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:30.027 09:40:53 -- setup/common.sh@32 -- # continue 00:12:30.027 09:40:53 -- setup/common.sh@31 -- # IFS=': ' 00:12:30.027 09:40:53 -- setup/common.sh@31 -- # read -r var val _ 00:12:30.027 09:40:53 -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:30.027 09:40:53 -- setup/common.sh@32 -- # continue 00:12:30.027 09:40:53 -- setup/common.sh@31 -- # IFS=': ' 00:12:30.027 09:40:53 -- setup/common.sh@31 -- # read -r var val _ 00:12:30.027 09:40:53 -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:30.027 09:40:53 -- setup/common.sh@32 -- # continue 00:12:30.027 09:40:53 -- setup/common.sh@31 -- # IFS=': ' 00:12:30.027 09:40:53 -- 
setup/common.sh@31 -- # read -r var val _ 00:12:30.027 09:40:53 -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:30.027 09:40:53 -- setup/common.sh@32 -- # continue 00:12:30.027 09:40:53 -- setup/common.sh@31 -- # IFS=': ' 00:12:30.027 09:40:53 -- setup/common.sh@31 -- # read -r var val _ 00:12:30.027 09:40:53 -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:30.027 09:40:53 -- setup/common.sh@32 -- # continue 00:12:30.027 09:40:53 -- setup/common.sh@31 -- # IFS=': ' 00:12:30.027 09:40:53 -- setup/common.sh@31 -- # read -r var val _ 00:12:30.027 09:40:53 -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:30.027 09:40:53 -- setup/common.sh@32 -- # continue 00:12:30.027 09:40:53 -- setup/common.sh@31 -- # IFS=': ' 00:12:30.027 09:40:53 -- setup/common.sh@31 -- # read -r var val _ 00:12:30.027 09:40:53 -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:30.027 09:40:53 -- setup/common.sh@32 -- # continue 00:12:30.027 09:40:53 -- setup/common.sh@31 -- # IFS=': ' 00:12:30.027 09:40:53 -- setup/common.sh@31 -- # read -r var val _ 00:12:30.027 09:40:53 -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:30.027 09:40:53 -- setup/common.sh@32 -- # continue 00:12:30.027 09:40:53 -- setup/common.sh@31 -- # IFS=': ' 00:12:30.027 09:40:53 -- setup/common.sh@31 -- # read -r var val _ 00:12:30.027 09:40:53 -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:30.027 09:40:53 -- setup/common.sh@32 -- # continue 00:12:30.027 09:40:53 -- setup/common.sh@31 -- # IFS=': ' 00:12:30.027 09:40:53 -- setup/common.sh@31 -- # read -r var val _ 00:12:30.028 09:40:53 -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:30.028 09:40:53 -- setup/common.sh@32 -- # continue 00:12:30.028 09:40:53 -- setup/common.sh@31 -- # IFS=': ' 00:12:30.028 09:40:53 -- setup/common.sh@31 -- # read -r var val _ 00:12:30.028 09:40:53 -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:30.028 09:40:53 -- setup/common.sh@32 -- # continue 00:12:30.028 09:40:53 -- setup/common.sh@31 -- # IFS=': ' 00:12:30.028 09:40:53 -- setup/common.sh@31 -- # read -r var val _ 00:12:30.028 09:40:53 -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:30.028 09:40:53 -- setup/common.sh@32 -- # continue 00:12:30.028 09:40:53 -- setup/common.sh@31 -- # IFS=': ' 00:12:30.028 09:40:53 -- setup/common.sh@31 -- # read -r var val _ 00:12:30.028 09:40:53 -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:30.028 09:40:53 -- setup/common.sh@32 -- # continue 00:12:30.028 09:40:53 -- setup/common.sh@31 -- # IFS=': ' 00:12:30.028 09:40:53 -- setup/common.sh@31 -- # read -r var val _ 00:12:30.028 09:40:53 -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:30.028 09:40:53 -- setup/common.sh@32 -- # continue 00:12:30.028 09:40:53 -- setup/common.sh@31 -- # IFS=': ' 00:12:30.028 09:40:53 -- setup/common.sh@31 -- # read -r var val _ 00:12:30.028 09:40:53 -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:30.028 09:40:53 -- setup/common.sh@32 -- # continue 00:12:30.028 09:40:53 -- setup/common.sh@31 -- # IFS=': ' 00:12:30.028 09:40:53 -- setup/common.sh@31 -- # read -r var val _ 00:12:30.028 09:40:53 -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:30.028 09:40:53 -- setup/common.sh@32 -- # continue 00:12:30.028 
09:40:53 -- setup/common.sh@31 -- # IFS=': ' 00:12:30.028 09:40:53 -- setup/common.sh@31 -- # read -r var val _ 00:12:30.028 09:40:53 -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:30.028 09:40:53 -- setup/common.sh@32 -- # continue 00:12:30.028 09:40:53 -- setup/common.sh@31 -- # IFS=': ' 00:12:30.028 09:40:53 -- setup/common.sh@31 -- # read -r var val _ 00:12:30.028 09:40:53 -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:30.028 09:40:53 -- setup/common.sh@32 -- # continue 00:12:30.028 09:40:53 -- setup/common.sh@31 -- # IFS=': ' 00:12:30.028 09:40:53 -- setup/common.sh@31 -- # read -r var val _ 00:12:30.028 09:40:53 -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:30.028 09:40:53 -- setup/common.sh@32 -- # continue 00:12:30.028 09:40:53 -- setup/common.sh@31 -- # IFS=': ' 00:12:30.028 09:40:53 -- setup/common.sh@31 -- # read -r var val _ 00:12:30.028 09:40:53 -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:30.028 09:40:53 -- setup/common.sh@32 -- # continue 00:12:30.028 09:40:53 -- setup/common.sh@31 -- # IFS=': ' 00:12:30.028 09:40:53 -- setup/common.sh@31 -- # read -r var val _ 00:12:30.028 09:40:53 -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:30.028 09:40:53 -- setup/common.sh@32 -- # continue 00:12:30.028 09:40:53 -- setup/common.sh@31 -- # IFS=': ' 00:12:30.028 09:40:53 -- setup/common.sh@31 -- # read -r var val _ 00:12:30.028 09:40:53 -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:30.028 09:40:53 -- setup/common.sh@32 -- # continue 00:12:30.028 09:40:53 -- setup/common.sh@31 -- # IFS=': ' 00:12:30.028 09:40:53 -- setup/common.sh@31 -- # read -r var val _ 00:12:30.028 09:40:53 -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:30.028 09:40:53 -- setup/common.sh@32 -- # continue 00:12:30.028 09:40:53 -- setup/common.sh@31 -- # IFS=': ' 00:12:30.028 09:40:53 -- setup/common.sh@31 -- # read -r var val _ 00:12:30.028 09:40:53 -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:30.028 09:40:53 -- setup/common.sh@32 -- # continue 00:12:30.028 09:40:53 -- setup/common.sh@31 -- # IFS=': ' 00:12:30.028 09:40:53 -- setup/common.sh@31 -- # read -r var val _ 00:12:30.028 09:40:53 -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:30.028 09:40:53 -- setup/common.sh@32 -- # continue 00:12:30.028 09:40:53 -- setup/common.sh@31 -- # IFS=': ' 00:12:30.028 09:40:53 -- setup/common.sh@31 -- # read -r var val _ 00:12:30.028 09:40:53 -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:30.028 09:40:53 -- setup/common.sh@32 -- # continue 00:12:30.028 09:40:53 -- setup/common.sh@31 -- # IFS=': ' 00:12:30.028 09:40:53 -- setup/common.sh@31 -- # read -r var val _ 00:12:30.028 09:40:53 -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:30.028 09:40:53 -- setup/common.sh@32 -- # continue 00:12:30.028 09:40:53 -- setup/common.sh@31 -- # IFS=': ' 00:12:30.028 09:40:53 -- setup/common.sh@31 -- # read -r var val _ 00:12:30.028 09:40:53 -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:30.028 09:40:53 -- setup/common.sh@32 -- # continue 00:12:30.028 09:40:53 -- setup/common.sh@31 -- # IFS=': ' 00:12:30.028 09:40:53 -- setup/common.sh@31 -- # read -r var val _ 00:12:30.028 09:40:53 -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 
00:12:30.028 09:40:53 -- setup/common.sh@32 -- # continue 00:12:30.028 09:40:53 -- setup/common.sh@31 -- # IFS=': ' 00:12:30.028 09:40:53 -- setup/common.sh@31 -- # read -r var val _ 00:12:30.028 09:40:53 -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:30.028 09:40:53 -- setup/common.sh@32 -- # continue 00:12:30.028 09:40:53 -- setup/common.sh@31 -- # IFS=': ' 00:12:30.028 09:40:53 -- setup/common.sh@31 -- # read -r var val _ 00:12:30.028 09:40:53 -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:30.028 09:40:53 -- setup/common.sh@32 -- # continue 00:12:30.028 09:40:53 -- setup/common.sh@31 -- # IFS=': ' 00:12:30.028 09:40:53 -- setup/common.sh@31 -- # read -r var val _ 00:12:30.028 09:40:53 -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:30.028 09:40:53 -- setup/common.sh@32 -- # continue 00:12:30.028 09:40:53 -- setup/common.sh@31 -- # IFS=': ' 00:12:30.028 09:40:53 -- setup/common.sh@31 -- # read -r var val _ 00:12:30.028 09:40:53 -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:30.028 09:40:53 -- setup/common.sh@32 -- # continue 00:12:30.028 09:40:53 -- setup/common.sh@31 -- # IFS=': ' 00:12:30.028 09:40:53 -- setup/common.sh@31 -- # read -r var val _ 00:12:30.028 09:40:53 -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:30.028 09:40:53 -- setup/common.sh@32 -- # continue 00:12:30.028 09:40:53 -- setup/common.sh@31 -- # IFS=': ' 00:12:30.028 09:40:53 -- setup/common.sh@31 -- # read -r var val _ 00:12:30.028 09:40:53 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:30.028 09:40:53 -- setup/common.sh@32 -- # continue 00:12:30.028 09:40:53 -- setup/common.sh@31 -- # IFS=': ' 00:12:30.028 09:40:53 -- setup/common.sh@31 -- # read -r var val _ 00:12:30.028 09:40:53 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:30.028 09:40:53 -- setup/common.sh@33 -- # echo 0 00:12:30.028 09:40:53 -- setup/common.sh@33 -- # return 0 00:12:30.028 09:40:53 -- setup/hugepages.sh@97 -- # anon=0 00:12:30.028 09:40:53 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:12:30.028 09:40:53 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:12:30.028 09:40:53 -- setup/common.sh@18 -- # local node= 00:12:30.028 09:40:53 -- setup/common.sh@19 -- # local var val 00:12:30.028 09:40:53 -- setup/common.sh@20 -- # local mem_f mem 00:12:30.028 09:40:53 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:12:30.028 09:40:53 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:12:30.028 09:40:53 -- setup/common.sh@25 -- # [[ -n '' ]] 00:12:30.028 09:40:53 -- setup/common.sh@28 -- # mapfile -t mem 00:12:30.028 09:40:53 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:12:30.028 09:40:53 -- setup/common.sh@31 -- # IFS=': ' 00:12:30.028 09:40:53 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 8586516 kB' 'MemAvailable: 10548724 kB' 'Buffers: 2436 kB' 'Cached: 2174820 kB' 'SwapCached: 0 kB' 'Active: 850176 kB' 'Inactive: 1449560 kB' 'Active(anon): 132944 kB' 'Inactive(anon): 0 kB' 'Active(file): 717232 kB' 'Inactive(file): 1449560 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 980 kB' 'Writeback: 0 kB' 'AnonPages: 124052 kB' 'Mapped: 48724 kB' 'Shmem: 10464 kB' 'KReclaimable: 64708 kB' 'Slab: 137672 kB' 'SReclaimable: 64708 kB' 'SUnreclaim: 72964 
kB' 'KernelStack: 6352 kB' 'PageTables: 4404 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13985300 kB' 'Committed_AS: 356948 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54708 kB' 'VmallocChunk: 0 kB' 'Percpu: 6240 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 173932 kB' 'DirectMap2M: 6117376 kB' 'DirectMap1G: 8388608 kB' 00:12:30.028 09:40:53 -- setup/common.sh@31 -- # read -r var val _ 00:12:30.028 09:40:53 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:30.028 09:40:53 -- setup/common.sh@32 -- # continue 00:12:30.028 09:40:53 -- setup/common.sh@31 -- # IFS=': ' 00:12:30.028 09:40:53 -- setup/common.sh@31 -- # read -r var val _ 00:12:30.028 09:40:53 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:30.028 09:40:53 -- setup/common.sh@32 -- # continue 00:12:30.028 09:40:53 -- setup/common.sh@31 -- # IFS=': ' 00:12:30.028 09:40:53 -- setup/common.sh@31 -- # read -r var val _ 00:12:30.028 09:40:53 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:30.028 09:40:53 -- setup/common.sh@32 -- # continue 00:12:30.028 09:40:53 -- setup/common.sh@31 -- # IFS=': ' 00:12:30.028 09:40:53 -- setup/common.sh@31 -- # read -r var val _ 00:12:30.028 09:40:53 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:30.028 09:40:53 -- setup/common.sh@32 -- # continue 00:12:30.028 09:40:53 -- setup/common.sh@31 -- # IFS=': ' 00:12:30.028 09:40:53 -- setup/common.sh@31 -- # read -r var val _ 00:12:30.028 09:40:53 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:30.028 09:40:53 -- setup/common.sh@32 -- # continue 00:12:30.028 09:40:53 -- setup/common.sh@31 -- # IFS=': ' 00:12:30.028 09:40:53 -- setup/common.sh@31 -- # read -r var val _ 00:12:30.028 09:40:53 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:30.028 09:40:53 -- setup/common.sh@32 -- # continue 00:12:30.028 09:40:53 -- setup/common.sh@31 -- # IFS=': ' 00:12:30.028 09:40:53 -- setup/common.sh@31 -- # read -r var val _ 00:12:30.028 09:40:53 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:30.028 09:40:53 -- setup/common.sh@32 -- # continue 00:12:30.028 09:40:53 -- setup/common.sh@31 -- # IFS=': ' 00:12:30.028 09:40:53 -- setup/common.sh@31 -- # read -r var val _ 00:12:30.028 09:40:53 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:30.029 09:40:53 -- setup/common.sh@32 -- # continue 00:12:30.029 09:40:53 -- setup/common.sh@31 -- # IFS=': ' 00:12:30.029 09:40:53 -- setup/common.sh@31 -- # read -r var val _ 00:12:30.029 09:40:53 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:30.029 09:40:53 -- setup/common.sh@32 -- # continue 00:12:30.029 09:40:53 -- setup/common.sh@31 -- # IFS=': ' 00:12:30.029 09:40:53 -- setup/common.sh@31 -- # read -r var val _ 00:12:30.029 09:40:53 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:30.029 09:40:53 -- setup/common.sh@32 -- # continue 00:12:30.029 09:40:53 -- setup/common.sh@31 -- # IFS=': ' 00:12:30.029 09:40:53 -- setup/common.sh@31 -- # read -r var val _ 00:12:30.029 
09:40:53 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:30.029 09:40:53 -- setup/common.sh@32 -- # continue 00:12:30.029 09:40:53 -- setup/common.sh@31 -- # IFS=': ' 00:12:30.029 09:40:53 -- setup/common.sh@31 -- # read -r var val _ 00:12:30.029 09:40:53 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:30.029 09:40:53 -- setup/common.sh@32 -- # continue 00:12:30.029 09:40:53 -- setup/common.sh@31 -- # IFS=': ' 00:12:30.029 09:40:53 -- setup/common.sh@31 -- # read -r var val _ 00:12:30.029 09:40:53 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:30.029 09:40:53 -- setup/common.sh@32 -- # continue 00:12:30.029 09:40:53 -- setup/common.sh@31 -- # IFS=': ' 00:12:30.029 09:40:53 -- setup/common.sh@31 -- # read -r var val _ 00:12:30.029 09:40:53 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:30.029 09:40:53 -- setup/common.sh@32 -- # continue 00:12:30.029 09:40:53 -- setup/common.sh@31 -- # IFS=': ' 00:12:30.029 09:40:53 -- setup/common.sh@31 -- # read -r var val _ 00:12:30.029 09:40:53 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:30.029 09:40:53 -- setup/common.sh@32 -- # continue 00:12:30.029 09:40:53 -- setup/common.sh@31 -- # IFS=': ' 00:12:30.029 09:40:53 -- setup/common.sh@31 -- # read -r var val _ 00:12:30.029 09:40:53 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:30.029 09:40:53 -- setup/common.sh@32 -- # continue 00:12:30.029 09:40:53 -- setup/common.sh@31 -- # IFS=': ' 00:12:30.029 09:40:53 -- setup/common.sh@31 -- # read -r var val _ 00:12:30.029 09:40:53 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:30.029 09:40:53 -- setup/common.sh@32 -- # continue 00:12:30.029 09:40:53 -- setup/common.sh@31 -- # IFS=': ' 00:12:30.029 09:40:53 -- setup/common.sh@31 -- # read -r var val _ 00:12:30.029 09:40:53 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:30.029 09:40:53 -- setup/common.sh@32 -- # continue 00:12:30.029 09:40:53 -- setup/common.sh@31 -- # IFS=': ' 00:12:30.029 09:40:53 -- setup/common.sh@31 -- # read -r var val _ 00:12:30.029 09:40:53 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:30.029 09:40:53 -- setup/common.sh@32 -- # continue 00:12:30.029 09:40:53 -- setup/common.sh@31 -- # IFS=': ' 00:12:30.029 09:40:53 -- setup/common.sh@31 -- # read -r var val _ 00:12:30.029 09:40:53 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:30.029 09:40:53 -- setup/common.sh@32 -- # continue 00:12:30.029 09:40:53 -- setup/common.sh@31 -- # IFS=': ' 00:12:30.029 09:40:53 -- setup/common.sh@31 -- # read -r var val _ 00:12:30.029 09:40:53 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:30.029 09:40:53 -- setup/common.sh@32 -- # continue 00:12:30.029 09:40:53 -- setup/common.sh@31 -- # IFS=': ' 00:12:30.029 09:40:53 -- setup/common.sh@31 -- # read -r var val _ 00:12:30.029 09:40:53 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:30.029 09:40:53 -- setup/common.sh@32 -- # continue 00:12:30.029 09:40:53 -- setup/common.sh@31 -- # IFS=': ' 00:12:30.029 09:40:53 -- setup/common.sh@31 -- # read -r var val _ 00:12:30.029 09:40:53 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:30.029 09:40:53 -- setup/common.sh@32 -- # continue 00:12:30.029 09:40:53 -- setup/common.sh@31 -- # IFS=': ' 
00:12:30.029 09:40:53 -- setup/common.sh@31 -- # read -r var val _ 00:12:30.029 09:40:53 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:30.029 09:40:53 -- setup/common.sh@32 -- # continue 00:12:30.029 09:40:53 -- setup/common.sh@31 -- # IFS=': ' 00:12:30.029 09:40:53 -- setup/common.sh@31 -- # read -r var val _ 00:12:30.029 09:40:53 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:30.029 09:40:53 -- setup/common.sh@32 -- # continue 00:12:30.029 09:40:53 -- setup/common.sh@31 -- # IFS=': ' 00:12:30.029 09:40:53 -- setup/common.sh@31 -- # read -r var val _ 00:12:30.029 09:40:53 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:30.029 09:40:53 -- setup/common.sh@32 -- # continue 00:12:30.029 09:40:53 -- setup/common.sh@31 -- # IFS=': ' 00:12:30.029 09:40:53 -- setup/common.sh@31 -- # read -r var val _ 00:12:30.029 09:40:53 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:30.029 09:40:53 -- setup/common.sh@32 -- # continue 00:12:30.029 09:40:53 -- setup/common.sh@31 -- # IFS=': ' 00:12:30.029 09:40:53 -- setup/common.sh@31 -- # read -r var val _ 00:12:30.029 09:40:53 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:30.029 09:40:53 -- setup/common.sh@32 -- # continue 00:12:30.029 09:40:53 -- setup/common.sh@31 -- # IFS=': ' 00:12:30.029 09:40:53 -- setup/common.sh@31 -- # read -r var val _ 00:12:30.029 09:40:53 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:30.029 09:40:53 -- setup/common.sh@32 -- # continue 00:12:30.029 09:40:53 -- setup/common.sh@31 -- # IFS=': ' 00:12:30.029 09:40:53 -- setup/common.sh@31 -- # read -r var val _ 00:12:30.029 09:40:53 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:30.029 09:40:53 -- setup/common.sh@32 -- # continue 00:12:30.029 09:40:53 -- setup/common.sh@31 -- # IFS=': ' 00:12:30.029 09:40:53 -- setup/common.sh@31 -- # read -r var val _ 00:12:30.029 09:40:53 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:30.029 09:40:53 -- setup/common.sh@32 -- # continue 00:12:30.029 09:40:53 -- setup/common.sh@31 -- # IFS=': ' 00:12:30.029 09:40:53 -- setup/common.sh@31 -- # read -r var val _ 00:12:30.029 09:40:53 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:30.029 09:40:53 -- setup/common.sh@32 -- # continue 00:12:30.029 09:40:53 -- setup/common.sh@31 -- # IFS=': ' 00:12:30.029 09:40:53 -- setup/common.sh@31 -- # read -r var val _ 00:12:30.029 09:40:53 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:30.029 09:40:53 -- setup/common.sh@32 -- # continue 00:12:30.029 09:40:53 -- setup/common.sh@31 -- # IFS=': ' 00:12:30.029 09:40:53 -- setup/common.sh@31 -- # read -r var val _ 00:12:30.029 09:40:53 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:30.029 09:40:53 -- setup/common.sh@32 -- # continue 00:12:30.029 09:40:53 -- setup/common.sh@31 -- # IFS=': ' 00:12:30.029 09:40:53 -- setup/common.sh@31 -- # read -r var val _ 00:12:30.029 09:40:53 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:30.029 09:40:53 -- setup/common.sh@32 -- # continue 00:12:30.029 09:40:53 -- setup/common.sh@31 -- # IFS=': ' 00:12:30.029 09:40:53 -- setup/common.sh@31 -- # read -r var val _ 00:12:30.029 09:40:53 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 
00:12:30.029 09:40:53 -- setup/common.sh@32 -- # continue 00:12:30.029 09:40:53 -- setup/common.sh@31 -- # IFS=': ' 00:12:30.029 09:40:53 -- setup/common.sh@31 -- # read -r var val _ 00:12:30.029 09:40:53 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:30.029 09:40:53 -- setup/common.sh@32 -- # continue 00:12:30.029 09:40:53 -- setup/common.sh@31 -- # IFS=': ' 00:12:30.029 09:40:53 -- setup/common.sh@31 -- # read -r var val _ 00:12:30.029 09:40:53 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:30.029 09:40:53 -- setup/common.sh@32 -- # continue 00:12:30.029 09:40:53 -- setup/common.sh@31 -- # IFS=': ' 00:12:30.029 09:40:53 -- setup/common.sh@31 -- # read -r var val _ 00:12:30.029 09:40:53 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:30.029 09:40:53 -- setup/common.sh@32 -- # continue 00:12:30.029 09:40:53 -- setup/common.sh@31 -- # IFS=': ' 00:12:30.029 09:40:53 -- setup/common.sh@31 -- # read -r var val _ 00:12:30.029 09:40:53 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:30.029 09:40:53 -- setup/common.sh@32 -- # continue 00:12:30.029 09:40:53 -- setup/common.sh@31 -- # IFS=': ' 00:12:30.029 09:40:53 -- setup/common.sh@31 -- # read -r var val _ 00:12:30.029 09:40:53 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:30.029 09:40:53 -- setup/common.sh@32 -- # continue 00:12:30.029 09:40:53 -- setup/common.sh@31 -- # IFS=': ' 00:12:30.029 09:40:53 -- setup/common.sh@31 -- # read -r var val _ 00:12:30.029 09:40:53 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:30.029 09:40:53 -- setup/common.sh@32 -- # continue 00:12:30.029 09:40:53 -- setup/common.sh@31 -- # IFS=': ' 00:12:30.029 09:40:53 -- setup/common.sh@31 -- # read -r var val _ 00:12:30.029 09:40:53 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:30.029 09:40:53 -- setup/common.sh@32 -- # continue 00:12:30.029 09:40:53 -- setup/common.sh@31 -- # IFS=': ' 00:12:30.029 09:40:53 -- setup/common.sh@31 -- # read -r var val _ 00:12:30.029 09:40:53 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:30.029 09:40:53 -- setup/common.sh@32 -- # continue 00:12:30.029 09:40:53 -- setup/common.sh@31 -- # IFS=': ' 00:12:30.029 09:40:53 -- setup/common.sh@31 -- # read -r var val _ 00:12:30.029 09:40:53 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:30.029 09:40:53 -- setup/common.sh@32 -- # continue 00:12:30.029 09:40:53 -- setup/common.sh@31 -- # IFS=': ' 00:12:30.029 09:40:53 -- setup/common.sh@31 -- # read -r var val _ 00:12:30.029 09:40:53 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:30.029 09:40:53 -- setup/common.sh@32 -- # continue 00:12:30.029 09:40:53 -- setup/common.sh@31 -- # IFS=': ' 00:12:30.029 09:40:53 -- setup/common.sh@31 -- # read -r var val _ 00:12:30.029 09:40:53 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:30.029 09:40:53 -- setup/common.sh@32 -- # continue 00:12:30.029 09:40:53 -- setup/common.sh@31 -- # IFS=': ' 00:12:30.029 09:40:53 -- setup/common.sh@31 -- # read -r var val _ 00:12:30.029 09:40:53 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:30.029 09:40:53 -- setup/common.sh@32 -- # continue 00:12:30.029 09:40:53 -- setup/common.sh@31 -- # IFS=': ' 00:12:30.030 09:40:53 -- setup/common.sh@31 -- # 
read -r var val _ 00:12:30.030 09:40:53 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:30.030 09:40:53 -- setup/common.sh@32 -- # continue 00:12:30.030 09:40:53 -- setup/common.sh@31 -- # IFS=': ' 00:12:30.030 09:40:53 -- setup/common.sh@31 -- # read -r var val _ 00:12:30.030 09:40:53 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:30.030 09:40:53 -- setup/common.sh@32 -- # continue 00:12:30.030 09:40:53 -- setup/common.sh@31 -- # IFS=': ' 00:12:30.030 09:40:53 -- setup/common.sh@31 -- # read -r var val _ 00:12:30.030 09:40:53 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:30.030 09:40:53 -- setup/common.sh@32 -- # continue 00:12:30.030 09:40:53 -- setup/common.sh@31 -- # IFS=': ' 00:12:30.030 09:40:53 -- setup/common.sh@31 -- # read -r var val _ 00:12:30.030 09:40:53 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:30.030 09:40:53 -- setup/common.sh@33 -- # echo 0 00:12:30.030 09:40:53 -- setup/common.sh@33 -- # return 0 00:12:30.030 09:40:53 -- setup/hugepages.sh@99 -- # surp=0 00:12:30.030 09:40:53 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:12:30.030 09:40:53 -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:12:30.030 09:40:53 -- setup/common.sh@18 -- # local node= 00:12:30.030 09:40:53 -- setup/common.sh@19 -- # local var val 00:12:30.030 09:40:53 -- setup/common.sh@20 -- # local mem_f mem 00:12:30.030 09:40:53 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:12:30.030 09:40:53 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:12:30.030 09:40:53 -- setup/common.sh@25 -- # [[ -n '' ]] 00:12:30.030 09:40:53 -- setup/common.sh@28 -- # mapfile -t mem 00:12:30.030 09:40:53 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:12:30.030 09:40:53 -- setup/common.sh@31 -- # IFS=': ' 00:12:30.030 09:40:53 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 8586516 kB' 'MemAvailable: 10548724 kB' 'Buffers: 2436 kB' 'Cached: 2174820 kB' 'SwapCached: 0 kB' 'Active: 850184 kB' 'Inactive: 1449560 kB' 'Active(anon): 132952 kB' 'Inactive(anon): 0 kB' 'Active(file): 717232 kB' 'Inactive(file): 1449560 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 980 kB' 'Writeback: 0 kB' 'AnonPages: 124060 kB' 'Mapped: 48724 kB' 'Shmem: 10464 kB' 'KReclaimable: 64708 kB' 'Slab: 137672 kB' 'SReclaimable: 64708 kB' 'SUnreclaim: 72964 kB' 'KernelStack: 6352 kB' 'PageTables: 4404 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13985300 kB' 'Committed_AS: 356948 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54708 kB' 'VmallocChunk: 0 kB' 'Percpu: 6240 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 173932 kB' 'DirectMap2M: 6117376 kB' 'DirectMap1G: 8388608 kB' 00:12:30.030 09:40:53 -- setup/common.sh@31 -- # read -r var val _ 00:12:30.030 09:40:53 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:30.030 09:40:53 -- setup/common.sh@32 -- # continue 00:12:30.030 09:40:53 -- setup/common.sh@31 -- # IFS=': ' 00:12:30.030 09:40:53 -- 
setup/common.sh@31 -- # read -r var val _ 00:12:30.030 09:40:53 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:30.030 09:40:53 -- setup/common.sh@32 -- # continue 00:12:30.030 09:40:53 -- setup/common.sh@31 -- # IFS=': ' 00:12:30.030 09:40:53 -- setup/common.sh@31 -- # read -r var val _ 00:12:30.030 09:40:53 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:30.030 09:40:53 -- setup/common.sh@32 -- # continue 00:12:30.030 09:40:53 -- setup/common.sh@31 -- # IFS=': ' 00:12:30.030 09:40:53 -- setup/common.sh@31 -- # read -r var val _ 00:12:30.030 09:40:53 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:30.030 09:40:53 -- setup/common.sh@32 -- # continue 00:12:30.030 09:40:53 -- setup/common.sh@31 -- # IFS=': ' 00:12:30.030 09:40:53 -- setup/common.sh@31 -- # read -r var val _ 00:12:30.030 09:40:53 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:30.030 09:40:53 -- setup/common.sh@32 -- # continue 00:12:30.030 09:40:53 -- setup/common.sh@31 -- # IFS=': ' 00:12:30.030 09:40:53 -- setup/common.sh@31 -- # read -r var val _ 00:12:30.030 09:40:53 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:30.030 09:40:53 -- setup/common.sh@32 -- # continue 00:12:30.030 09:40:53 -- setup/common.sh@31 -- # IFS=': ' 00:12:30.030 09:40:53 -- setup/common.sh@31 -- # read -r var val _ 00:12:30.030 09:40:53 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:30.030 09:40:53 -- setup/common.sh@32 -- # continue 00:12:30.030 09:40:53 -- setup/common.sh@31 -- # IFS=': ' 00:12:30.030 09:40:53 -- setup/common.sh@31 -- # read -r var val _ 00:12:30.030 09:40:53 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:30.030 09:40:53 -- setup/common.sh@32 -- # continue 00:12:30.030 09:40:53 -- setup/common.sh@31 -- # IFS=': ' 00:12:30.030 09:40:53 -- setup/common.sh@31 -- # read -r var val _ 00:12:30.030 09:40:53 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:30.030 09:40:53 -- setup/common.sh@32 -- # continue 00:12:30.030 09:40:53 -- setup/common.sh@31 -- # IFS=': ' 00:12:30.030 09:40:53 -- setup/common.sh@31 -- # read -r var val _ 00:12:30.030 09:40:53 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:30.030 09:40:53 -- setup/common.sh@32 -- # continue 00:12:30.030 09:40:53 -- setup/common.sh@31 -- # IFS=': ' 00:12:30.030 09:40:53 -- setup/common.sh@31 -- # read -r var val _ 00:12:30.030 09:40:53 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:30.030 09:40:53 -- setup/common.sh@32 -- # continue 00:12:30.030 09:40:53 -- setup/common.sh@31 -- # IFS=': ' 00:12:30.030 09:40:53 -- setup/common.sh@31 -- # read -r var val _ 00:12:30.030 09:40:53 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:30.030 09:40:53 -- setup/common.sh@32 -- # continue 00:12:30.030 09:40:53 -- setup/common.sh@31 -- # IFS=': ' 00:12:30.030 09:40:53 -- setup/common.sh@31 -- # read -r var val _ 00:12:30.030 09:40:53 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:30.030 09:40:53 -- setup/common.sh@32 -- # continue 00:12:30.030 09:40:53 -- setup/common.sh@31 -- # IFS=': ' 00:12:30.030 09:40:53 -- setup/common.sh@31 -- # read -r var val _ 00:12:30.030 09:40:53 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:30.030 09:40:53 -- setup/common.sh@32 -- 
# continue 00:12:30.030 09:40:53 -- setup/common.sh@31 -- # IFS=': ' 00:12:30.030 09:40:53 -- setup/common.sh@31 -- # read -r var val _ 00:12:30.030 09:40:53 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:30.030 09:40:53 -- setup/common.sh@32 -- # continue 00:12:30.030 09:40:53 -- setup/common.sh@31 -- # IFS=': ' 00:12:30.030 09:40:53 -- setup/common.sh@31 -- # read -r var val _ 00:12:30.030 09:40:53 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:30.030 09:40:53 -- setup/common.sh@32 -- # continue 00:12:30.030 09:40:53 -- setup/common.sh@31 -- # IFS=': ' 00:12:30.030 09:40:53 -- setup/common.sh@31 -- # read -r var val _ 00:12:30.030 09:40:53 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:30.030 09:40:53 -- setup/common.sh@32 -- # continue 00:12:30.030 09:40:53 -- setup/common.sh@31 -- # IFS=': ' 00:12:30.030 09:40:53 -- setup/common.sh@31 -- # read -r var val _ 00:12:30.030 09:40:53 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:30.030 09:40:53 -- setup/common.sh@32 -- # continue 00:12:30.030 09:40:53 -- setup/common.sh@31 -- # IFS=': ' 00:12:30.030 09:40:53 -- setup/common.sh@31 -- # read -r var val _ 00:12:30.030 09:40:53 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:30.030 09:40:53 -- setup/common.sh@32 -- # continue 00:12:30.030 09:40:53 -- setup/common.sh@31 -- # IFS=': ' 00:12:30.030 09:40:53 -- setup/common.sh@31 -- # read -r var val _ 00:12:30.030 09:40:53 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:30.030 09:40:53 -- setup/common.sh@32 -- # continue 00:12:30.030 09:40:53 -- setup/common.sh@31 -- # IFS=': ' 00:12:30.030 09:40:53 -- setup/common.sh@31 -- # read -r var val _ 00:12:30.030 09:40:53 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:30.030 09:40:53 -- setup/common.sh@32 -- # continue 00:12:30.030 09:40:53 -- setup/common.sh@31 -- # IFS=': ' 00:12:30.030 09:40:53 -- setup/common.sh@31 -- # read -r var val _ 00:12:30.030 09:40:53 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:30.030 09:40:53 -- setup/common.sh@32 -- # continue 00:12:30.030 09:40:53 -- setup/common.sh@31 -- # IFS=': ' 00:12:30.030 09:40:53 -- setup/common.sh@31 -- # read -r var val _ 00:12:30.030 09:40:53 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:30.030 09:40:53 -- setup/common.sh@32 -- # continue 00:12:30.030 09:40:53 -- setup/common.sh@31 -- # IFS=': ' 00:12:30.030 09:40:53 -- setup/common.sh@31 -- # read -r var val _ 00:12:30.030 09:40:53 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:30.030 09:40:53 -- setup/common.sh@32 -- # continue 00:12:30.030 09:40:53 -- setup/common.sh@31 -- # IFS=': ' 00:12:30.030 09:40:53 -- setup/common.sh@31 -- # read -r var val _ 00:12:30.030 09:40:53 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:30.030 09:40:53 -- setup/common.sh@32 -- # continue 00:12:30.030 09:40:53 -- setup/common.sh@31 -- # IFS=': ' 00:12:30.030 09:40:53 -- setup/common.sh@31 -- # read -r var val _ 00:12:30.030 09:40:53 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:30.030 09:40:53 -- setup/common.sh@32 -- # continue 00:12:30.030 09:40:53 -- setup/common.sh@31 -- # IFS=': ' 00:12:30.030 09:40:53 -- setup/common.sh@31 -- # read -r var val _ 00:12:30.030 09:40:53 -- setup/common.sh@32 -- # [[ SUnreclaim == 
\H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:30.030 09:40:53 -- setup/common.sh@32 -- # continue 00:12:30.030 09:40:53 -- setup/common.sh@31 -- # IFS=': ' 00:12:30.030 09:40:53 -- setup/common.sh@31 -- # read -r var val _ 00:12:30.030 09:40:53 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:30.030 09:40:53 -- setup/common.sh@32 -- # continue 00:12:30.030 09:40:53 -- setup/common.sh@31 -- # IFS=': ' 00:12:30.030 09:40:53 -- setup/common.sh@31 -- # read -r var val _ 00:12:30.030 09:40:53 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:30.030 09:40:53 -- setup/common.sh@32 -- # continue 00:12:30.030 09:40:53 -- setup/common.sh@31 -- # IFS=': ' 00:12:30.030 09:40:53 -- setup/common.sh@31 -- # read -r var val _ 00:12:30.030 09:40:53 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:30.030 09:40:53 -- setup/common.sh@32 -- # continue 00:12:30.031 09:40:53 -- setup/common.sh@31 -- # IFS=': ' 00:12:30.031 09:40:53 -- setup/common.sh@31 -- # read -r var val _ 00:12:30.031 09:40:53 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:30.031 09:40:53 -- setup/common.sh@32 -- # continue 00:12:30.031 09:40:53 -- setup/common.sh@31 -- # IFS=': ' 00:12:30.031 09:40:53 -- setup/common.sh@31 -- # read -r var val _ 00:12:30.031 09:40:53 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:30.031 09:40:53 -- setup/common.sh@32 -- # continue 00:12:30.031 09:40:53 -- setup/common.sh@31 -- # IFS=': ' 00:12:30.031 09:40:53 -- setup/common.sh@31 -- # read -r var val _ 00:12:30.031 09:40:53 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:30.031 09:40:53 -- setup/common.sh@32 -- # continue 00:12:30.031 09:40:53 -- setup/common.sh@31 -- # IFS=': ' 00:12:30.031 09:40:53 -- setup/common.sh@31 -- # read -r var val _ 00:12:30.031 09:40:53 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:30.031 09:40:53 -- setup/common.sh@32 -- # continue 00:12:30.031 09:40:53 -- setup/common.sh@31 -- # IFS=': ' 00:12:30.031 09:40:53 -- setup/common.sh@31 -- # read -r var val _ 00:12:30.031 09:40:53 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:30.031 09:40:53 -- setup/common.sh@32 -- # continue 00:12:30.031 09:40:53 -- setup/common.sh@31 -- # IFS=': ' 00:12:30.031 09:40:53 -- setup/common.sh@31 -- # read -r var val _ 00:12:30.031 09:40:53 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:30.031 09:40:53 -- setup/common.sh@32 -- # continue 00:12:30.031 09:40:53 -- setup/common.sh@31 -- # IFS=': ' 00:12:30.031 09:40:53 -- setup/common.sh@31 -- # read -r var val _ 00:12:30.031 09:40:53 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:30.031 09:40:53 -- setup/common.sh@32 -- # continue 00:12:30.031 09:40:53 -- setup/common.sh@31 -- # IFS=': ' 00:12:30.031 09:40:53 -- setup/common.sh@31 -- # read -r var val _ 00:12:30.031 09:40:53 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:30.031 09:40:53 -- setup/common.sh@32 -- # continue 00:12:30.031 09:40:53 -- setup/common.sh@31 -- # IFS=': ' 00:12:30.031 09:40:53 -- setup/common.sh@31 -- # read -r var val _ 00:12:30.031 09:40:53 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:30.031 09:40:53 -- setup/common.sh@32 -- # continue 00:12:30.031 09:40:53 -- setup/common.sh@31 -- # IFS=': ' 00:12:30.031 09:40:53 -- 
setup/common.sh@31 -- # read -r var val _ 00:12:30.031 09:40:53 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:30.031 09:40:53 -- setup/common.sh@32 -- # continue 00:12:30.031 09:40:53 -- setup/common.sh@31 -- # IFS=': ' 00:12:30.031 09:40:53 -- setup/common.sh@31 -- # read -r var val _ 00:12:30.031 09:40:53 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:30.031 09:40:53 -- setup/common.sh@32 -- # continue 00:12:30.031 09:40:53 -- setup/common.sh@31 -- # IFS=': ' 00:12:30.031 09:40:53 -- setup/common.sh@31 -- # read -r var val _ 00:12:30.031 09:40:53 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:30.031 09:40:53 -- setup/common.sh@32 -- # continue 00:12:30.031 09:40:53 -- setup/common.sh@31 -- # IFS=': ' 00:12:30.031 09:40:53 -- setup/common.sh@31 -- # read -r var val _ 00:12:30.031 09:40:53 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:30.031 09:40:53 -- setup/common.sh@32 -- # continue 00:12:30.031 09:40:53 -- setup/common.sh@31 -- # IFS=': ' 00:12:30.031 09:40:53 -- setup/common.sh@31 -- # read -r var val _ 00:12:30.031 09:40:53 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:30.031 09:40:53 -- setup/common.sh@32 -- # continue 00:12:30.031 09:40:53 -- setup/common.sh@31 -- # IFS=': ' 00:12:30.031 09:40:53 -- setup/common.sh@31 -- # read -r var val _ 00:12:30.031 09:40:53 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:30.031 09:40:53 -- setup/common.sh@32 -- # continue 00:12:30.031 09:40:53 -- setup/common.sh@31 -- # IFS=': ' 00:12:30.031 09:40:53 -- setup/common.sh@31 -- # read -r var val _ 00:12:30.031 09:40:53 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:30.031 09:40:53 -- setup/common.sh@32 -- # continue 00:12:30.031 09:40:53 -- setup/common.sh@31 -- # IFS=': ' 00:12:30.031 09:40:53 -- setup/common.sh@31 -- # read -r var val _ 00:12:30.031 09:40:53 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:30.031 09:40:53 -- setup/common.sh@32 -- # continue 00:12:30.031 09:40:53 -- setup/common.sh@31 -- # IFS=': ' 00:12:30.031 09:40:53 -- setup/common.sh@31 -- # read -r var val _ 00:12:30.031 09:40:53 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:30.031 09:40:53 -- setup/common.sh@32 -- # continue 00:12:30.031 09:40:53 -- setup/common.sh@31 -- # IFS=': ' 00:12:30.031 09:40:53 -- setup/common.sh@31 -- # read -r var val _ 00:12:30.031 09:40:53 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:30.031 09:40:53 -- setup/common.sh@32 -- # continue 00:12:30.031 09:40:53 -- setup/common.sh@31 -- # IFS=': ' 00:12:30.031 09:40:53 -- setup/common.sh@31 -- # read -r var val _ 00:12:30.031 09:40:53 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:30.031 09:40:53 -- setup/common.sh@32 -- # continue 00:12:30.031 09:40:53 -- setup/common.sh@31 -- # IFS=': ' 00:12:30.031 09:40:53 -- setup/common.sh@31 -- # read -r var val _ 00:12:30.031 09:40:53 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:30.031 09:40:53 -- setup/common.sh@33 -- # echo 0 00:12:30.031 09:40:53 -- setup/common.sh@33 -- # return 0 00:12:30.031 09:40:53 -- setup/hugepages.sh@100 -- # resv=0 00:12:30.031 09:40:53 -- setup/hugepages.sh@102 -- # echo nr_hugepages=512 00:12:30.031 nr_hugepages=512 00:12:30.031 
09:40:53 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:12:30.031 resv_hugepages=0 00:12:30.031 surplus_hugepages=0 00:12:30.031 anon_hugepages=0 00:12:30.031 09:40:53 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:12:30.031 09:40:53 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:12:30.031 09:40:53 -- setup/hugepages.sh@107 -- # (( 512 == nr_hugepages + surp + resv )) 00:12:30.031 09:40:53 -- setup/hugepages.sh@109 -- # (( 512 == nr_hugepages )) 00:12:30.031 09:40:53 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:12:30.031 09:40:53 -- setup/common.sh@17 -- # local get=HugePages_Total 00:12:30.031 09:40:53 -- setup/common.sh@18 -- # local node= 00:12:30.031 09:40:53 -- setup/common.sh@19 -- # local var val 00:12:30.031 09:40:53 -- setup/common.sh@20 -- # local mem_f mem 00:12:30.031 09:40:53 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:12:30.031 09:40:53 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:12:30.031 09:40:53 -- setup/common.sh@25 -- # [[ -n '' ]] 00:12:30.031 09:40:53 -- setup/common.sh@28 -- # mapfile -t mem 00:12:30.031 09:40:53 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:12:30.031 09:40:53 -- setup/common.sh@31 -- # IFS=': ' 00:12:30.031 09:40:53 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 8586516 kB' 'MemAvailable: 10548724 kB' 'Buffers: 2436 kB' 'Cached: 2174820 kB' 'SwapCached: 0 kB' 'Active: 850168 kB' 'Inactive: 1449560 kB' 'Active(anon): 132936 kB' 'Inactive(anon): 0 kB' 'Active(file): 717232 kB' 'Inactive(file): 1449560 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 980 kB' 'Writeback: 0 kB' 'AnonPages: 124044 kB' 'Mapped: 48724 kB' 'Shmem: 10464 kB' 'KReclaimable: 64708 kB' 'Slab: 137672 kB' 'SReclaimable: 64708 kB' 'SUnreclaim: 72964 kB' 'KernelStack: 6352 kB' 'PageTables: 4404 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13985300 kB' 'Committed_AS: 356948 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54708 kB' 'VmallocChunk: 0 kB' 'Percpu: 6240 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 173932 kB' 'DirectMap2M: 6117376 kB' 'DirectMap1G: 8388608 kB' 00:12:30.031 09:40:53 -- setup/common.sh@31 -- # read -r var val _ 00:12:30.031 09:40:53 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:30.031 09:40:53 -- setup/common.sh@32 -- # continue 00:12:30.031 09:40:53 -- setup/common.sh@31 -- # IFS=': ' 00:12:30.031 09:40:53 -- setup/common.sh@31 -- # read -r var val _ 00:12:30.031 09:40:53 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:30.031 09:40:53 -- setup/common.sh@32 -- # continue 00:12:30.031 09:40:53 -- setup/common.sh@31 -- # IFS=': ' 00:12:30.031 09:40:53 -- setup/common.sh@31 -- # read -r var val _ 00:12:30.031 09:40:53 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:30.031 09:40:53 -- setup/common.sh@32 -- # continue 00:12:30.031 09:40:53 -- setup/common.sh@31 -- # IFS=': ' 00:12:30.031 09:40:53 -- setup/common.sh@31 -- # read -r var val _ 00:12:30.031 09:40:53 -- setup/common.sh@32 -- # [[ Buffers == 
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:30.031 09:40:53 -- setup/common.sh@32 -- # continue 00:12:30.031 09:40:53 -- setup/common.sh@31 -- # IFS=': ' 00:12:30.031 09:40:53 -- setup/common.sh@31 -- # read -r var val _ 00:12:30.031 09:40:53 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:30.031 09:40:53 -- setup/common.sh@32 -- # continue 00:12:30.031 09:40:53 -- setup/common.sh@31 -- # IFS=': ' 00:12:30.031 09:40:53 -- setup/common.sh@31 -- # read -r var val _ 00:12:30.031 09:40:53 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:30.031 09:40:53 -- setup/common.sh@32 -- # continue 00:12:30.031 09:40:53 -- setup/common.sh@31 -- # IFS=': ' 00:12:30.031 09:40:53 -- setup/common.sh@31 -- # read -r var val _ 00:12:30.031 09:40:53 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:30.031 09:40:53 -- setup/common.sh@32 -- # continue 00:12:30.031 09:40:53 -- setup/common.sh@31 -- # IFS=': ' 00:12:30.031 09:40:53 -- setup/common.sh@31 -- # read -r var val _ 00:12:30.031 09:40:53 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:30.031 09:40:53 -- setup/common.sh@32 -- # continue 00:12:30.031 09:40:53 -- setup/common.sh@31 -- # IFS=': ' 00:12:30.031 09:40:53 -- setup/common.sh@31 -- # read -r var val _ 00:12:30.031 09:40:53 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:30.031 09:40:53 -- setup/common.sh@32 -- # continue 00:12:30.031 09:40:53 -- setup/common.sh@31 -- # IFS=': ' 00:12:30.031 09:40:53 -- setup/common.sh@31 -- # read -r var val _ 00:12:30.031 09:40:53 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:30.032 09:40:53 -- setup/common.sh@32 -- # continue 00:12:30.032 09:40:53 -- setup/common.sh@31 -- # IFS=': ' 00:12:30.032 09:40:53 -- setup/common.sh@31 -- # read -r var val _ 00:12:30.032 09:40:53 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:30.032 09:40:53 -- setup/common.sh@32 -- # continue 00:12:30.032 09:40:53 -- setup/common.sh@31 -- # IFS=': ' 00:12:30.032 09:40:53 -- setup/common.sh@31 -- # read -r var val _ 00:12:30.032 09:40:53 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:30.032 09:40:53 -- setup/common.sh@32 -- # continue 00:12:30.032 09:40:53 -- setup/common.sh@31 -- # IFS=': ' 00:12:30.032 09:40:53 -- setup/common.sh@31 -- # read -r var val _ 00:12:30.032 09:40:53 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:30.032 09:40:53 -- setup/common.sh@32 -- # continue 00:12:30.032 09:40:53 -- setup/common.sh@31 -- # IFS=': ' 00:12:30.032 09:40:53 -- setup/common.sh@31 -- # read -r var val _ 00:12:30.032 09:40:53 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:30.032 09:40:53 -- setup/common.sh@32 -- # continue 00:12:30.032 09:40:53 -- setup/common.sh@31 -- # IFS=': ' 00:12:30.032 09:40:53 -- setup/common.sh@31 -- # read -r var val _ 00:12:30.032 09:40:53 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:30.032 09:40:53 -- setup/common.sh@32 -- # continue 00:12:30.032 09:40:53 -- setup/common.sh@31 -- # IFS=': ' 00:12:30.032 09:40:53 -- setup/common.sh@31 -- # read -r var val _ 00:12:30.032 09:40:53 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:30.032 09:40:53 -- setup/common.sh@32 -- # continue 00:12:30.032 09:40:53 -- setup/common.sh@31 -- # IFS=': ' 
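For reference, the setup/common.sh loop being traced here boils down to roughly the following sketch: read /proc/meminfo (or /sys/devices/system/node/nodeN/meminfo when a node number is given), split each line on ':' and whitespace, and print the value of the single field requested (HugePages_Surp, HugePages_Rsvd, HugePages_Total, ...). The function name and inline values below are illustrative only; the real helper is get_meminfo in setup/common.sh as shown in the trace, not this sketch.

# Simplified illustration (not the script verbatim) of the meminfo lookup traced above.
get_meminfo_sketch() {
    local get=$1 node=${2:-}
    local mem_f=/proc/meminfo
    local line var val _
    # Per-node counters live under /sys/devices/system/node/nodeN/meminfo
    if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
        mem_f=/sys/devices/system/node/node$node/meminfo
    fi
    while read -r line; do
        line=${line#"Node $node "}        # per-node files prefix every field with "Node N"
        IFS=': ' read -r var val _ <<< "$line"
        if [[ $var == "$get" ]]; then     # e.g. HugePages_Total, HugePages_Surp, HugePages_Rsvd
            echo "$val"
            return 0
        fi
    done < "$mem_f"
    return 1
}
# On the VM traced above, get_meminfo_sketch HugePages_Total prints 512, and the
# verification step then asserts (( HugePages_Total == nr_hugepages + surp + resv )),
# i.e. (( 512 == 512 + 0 + 0 )), before checking the per-node split.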
00:12:30.032 09:40:53 -- setup/common.sh@31 -- # read -r var val _ 00:12:30.032 09:40:53 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:30.032 09:40:53 -- setup/common.sh@32 -- # continue 00:12:30.032 09:40:53 -- setup/common.sh@31 -- # IFS=': ' 00:12:30.032 09:40:53 -- setup/common.sh@31 -- # read -r var val _ 00:12:30.032 09:40:53 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:30.032 09:40:53 -- setup/common.sh@32 -- # continue 00:12:30.032 09:40:53 -- setup/common.sh@31 -- # IFS=': ' 00:12:30.032 09:40:53 -- setup/common.sh@31 -- # read -r var val _ 00:12:30.032 09:40:53 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:30.032 09:40:53 -- setup/common.sh@32 -- # continue 00:12:30.032 09:40:53 -- setup/common.sh@31 -- # IFS=': ' 00:12:30.032 09:40:53 -- setup/common.sh@31 -- # read -r var val _ 00:12:30.032 09:40:53 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:30.032 09:40:53 -- setup/common.sh@32 -- # continue 00:12:30.032 09:40:53 -- setup/common.sh@31 -- # IFS=': ' 00:12:30.032 09:40:53 -- setup/common.sh@31 -- # read -r var val _ 00:12:30.032 09:40:53 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:30.032 09:40:53 -- setup/common.sh@32 -- # continue 00:12:30.032 09:40:53 -- setup/common.sh@31 -- # IFS=': ' 00:12:30.032 09:40:53 -- setup/common.sh@31 -- # read -r var val _ 00:12:30.032 09:40:53 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:30.032 09:40:53 -- setup/common.sh@32 -- # continue 00:12:30.032 09:40:53 -- setup/common.sh@31 -- # IFS=': ' 00:12:30.032 09:40:53 -- setup/common.sh@31 -- # read -r var val _ 00:12:30.032 09:40:53 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:30.032 09:40:53 -- setup/common.sh@32 -- # continue 00:12:30.032 09:40:53 -- setup/common.sh@31 -- # IFS=': ' 00:12:30.032 09:40:53 -- setup/common.sh@31 -- # read -r var val _ 00:12:30.032 09:40:53 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:30.032 09:40:53 -- setup/common.sh@32 -- # continue 00:12:30.032 09:40:53 -- setup/common.sh@31 -- # IFS=': ' 00:12:30.032 09:40:53 -- setup/common.sh@31 -- # read -r var val _ 00:12:30.032 09:40:53 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:30.032 09:40:53 -- setup/common.sh@32 -- # continue 00:12:30.032 09:40:53 -- setup/common.sh@31 -- # IFS=': ' 00:12:30.032 09:40:53 -- setup/common.sh@31 -- # read -r var val _ 00:12:30.032 09:40:53 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:30.032 09:40:53 -- setup/common.sh@32 -- # continue 00:12:30.032 09:40:53 -- setup/common.sh@31 -- # IFS=': ' 00:12:30.032 09:40:53 -- setup/common.sh@31 -- # read -r var val _ 00:12:30.032 09:40:53 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:30.032 09:40:53 -- setup/common.sh@32 -- # continue 00:12:30.032 09:40:53 -- setup/common.sh@31 -- # IFS=': ' 00:12:30.032 09:40:53 -- setup/common.sh@31 -- # read -r var val _ 00:12:30.032 09:40:53 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:30.032 09:40:53 -- setup/common.sh@32 -- # continue 00:12:30.032 09:40:53 -- setup/common.sh@31 -- # IFS=': ' 00:12:30.032 09:40:53 -- setup/common.sh@31 -- # read -r var val _ 00:12:30.032 09:40:53 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:30.032 
09:40:53 -- setup/common.sh@32 -- # continue 00:12:30.032 09:40:53 -- setup/common.sh@31 -- # IFS=': ' 00:12:30.032 09:40:53 -- setup/common.sh@31 -- # read -r var val _ 00:12:30.032 09:40:53 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:30.032 09:40:53 -- setup/common.sh@32 -- # continue 00:12:30.032 09:40:53 -- setup/common.sh@31 -- # IFS=': ' 00:12:30.032 09:40:53 -- setup/common.sh@31 -- # read -r var val _ 00:12:30.032 09:40:53 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:30.032 09:40:53 -- setup/common.sh@32 -- # continue 00:12:30.032 09:40:53 -- setup/common.sh@31 -- # IFS=': ' 00:12:30.032 09:40:53 -- setup/common.sh@31 -- # read -r var val _ 00:12:30.032 09:40:53 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:30.032 09:40:53 -- setup/common.sh@32 -- # continue 00:12:30.032 09:40:53 -- setup/common.sh@31 -- # IFS=': ' 00:12:30.032 09:40:53 -- setup/common.sh@31 -- # read -r var val _ 00:12:30.032 09:40:53 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:30.032 09:40:53 -- setup/common.sh@32 -- # continue 00:12:30.032 09:40:53 -- setup/common.sh@31 -- # IFS=': ' 00:12:30.032 09:40:53 -- setup/common.sh@31 -- # read -r var val _ 00:12:30.032 09:40:53 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:30.032 09:40:53 -- setup/common.sh@32 -- # continue 00:12:30.032 09:40:53 -- setup/common.sh@31 -- # IFS=': ' 00:12:30.032 09:40:53 -- setup/common.sh@31 -- # read -r var val _ 00:12:30.032 09:40:53 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:30.032 09:40:53 -- setup/common.sh@32 -- # continue 00:12:30.032 09:40:53 -- setup/common.sh@31 -- # IFS=': ' 00:12:30.032 09:40:53 -- setup/common.sh@31 -- # read -r var val _ 00:12:30.032 09:40:53 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:30.032 09:40:53 -- setup/common.sh@32 -- # continue 00:12:30.032 09:40:53 -- setup/common.sh@31 -- # IFS=': ' 00:12:30.032 09:40:53 -- setup/common.sh@31 -- # read -r var val _ 00:12:30.032 09:40:53 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:30.032 09:40:53 -- setup/common.sh@32 -- # continue 00:12:30.032 09:40:53 -- setup/common.sh@31 -- # IFS=': ' 00:12:30.032 09:40:53 -- setup/common.sh@31 -- # read -r var val _ 00:12:30.032 09:40:53 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:30.032 09:40:53 -- setup/common.sh@32 -- # continue 00:12:30.032 09:40:53 -- setup/common.sh@31 -- # IFS=': ' 00:12:30.032 09:40:53 -- setup/common.sh@31 -- # read -r var val _ 00:12:30.032 09:40:53 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:30.032 09:40:53 -- setup/common.sh@32 -- # continue 00:12:30.032 09:40:53 -- setup/common.sh@31 -- # IFS=': ' 00:12:30.032 09:40:53 -- setup/common.sh@31 -- # read -r var val _ 00:12:30.032 09:40:53 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:30.032 09:40:53 -- setup/common.sh@32 -- # continue 00:12:30.032 09:40:53 -- setup/common.sh@31 -- # IFS=': ' 00:12:30.032 09:40:53 -- setup/common.sh@31 -- # read -r var val _ 00:12:30.032 09:40:53 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:30.032 09:40:53 -- setup/common.sh@32 -- # continue 00:12:30.032 09:40:53 -- setup/common.sh@31 -- # IFS=': ' 00:12:30.032 09:40:53 -- 
setup/common.sh@31 -- # read -r var val _ 00:12:30.032 09:40:53 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:30.032 09:40:53 -- setup/common.sh@32 -- # continue 00:12:30.032 09:40:53 -- setup/common.sh@31 -- # IFS=': ' 00:12:30.032 09:40:53 -- setup/common.sh@31 -- # read -r var val _ 00:12:30.032 09:40:53 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:30.032 09:40:53 -- setup/common.sh@32 -- # continue 00:12:30.032 09:40:53 -- setup/common.sh@31 -- # IFS=': ' 00:12:30.032 09:40:53 -- setup/common.sh@31 -- # read -r var val _ 00:12:30.032 09:40:53 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:30.032 09:40:53 -- setup/common.sh@32 -- # continue 00:12:30.032 09:40:53 -- setup/common.sh@31 -- # IFS=': ' 00:12:30.032 09:40:53 -- setup/common.sh@31 -- # read -r var val _ 00:12:30.032 09:40:53 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:30.032 09:40:53 -- setup/common.sh@32 -- # continue 00:12:30.033 09:40:53 -- setup/common.sh@31 -- # IFS=': ' 00:12:30.033 09:40:53 -- setup/common.sh@31 -- # read -r var val _ 00:12:30.033 09:40:53 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:30.033 09:40:53 -- setup/common.sh@32 -- # continue 00:12:30.033 09:40:53 -- setup/common.sh@31 -- # IFS=': ' 00:12:30.033 09:40:53 -- setup/common.sh@31 -- # read -r var val _ 00:12:30.033 09:40:53 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:30.033 09:40:53 -- setup/common.sh@32 -- # continue 00:12:30.033 09:40:53 -- setup/common.sh@31 -- # IFS=': ' 00:12:30.033 09:40:53 -- setup/common.sh@31 -- # read -r var val _ 00:12:30.033 09:40:53 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:30.033 09:40:53 -- setup/common.sh@32 -- # continue 00:12:30.033 09:40:53 -- setup/common.sh@31 -- # IFS=': ' 00:12:30.033 09:40:53 -- setup/common.sh@31 -- # read -r var val _ 00:12:30.033 09:40:53 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:30.033 09:40:53 -- setup/common.sh@33 -- # echo 512 00:12:30.033 09:40:53 -- setup/common.sh@33 -- # return 0 00:12:30.033 09:40:53 -- setup/hugepages.sh@110 -- # (( 512 == nr_hugepages + surp + resv )) 00:12:30.033 09:40:53 -- setup/hugepages.sh@112 -- # get_nodes 00:12:30.033 09:40:53 -- setup/hugepages.sh@27 -- # local node 00:12:30.033 09:40:53 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:12:30.033 09:40:53 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:12:30.033 09:40:53 -- setup/hugepages.sh@32 -- # no_nodes=1 00:12:30.033 09:40:53 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:12:30.033 09:40:53 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:12:30.033 09:40:53 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:12:30.033 09:40:53 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:12:30.033 09:40:53 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:12:30.033 09:40:53 -- setup/common.sh@18 -- # local node=0 00:12:30.033 09:40:53 -- setup/common.sh@19 -- # local var val 00:12:30.033 09:40:53 -- setup/common.sh@20 -- # local mem_f mem 00:12:30.033 09:40:53 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:12:30.033 09:40:53 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:12:30.033 09:40:53 -- setup/common.sh@24 -- # 
mem_f=/sys/devices/system/node/node0/meminfo 00:12:30.033 09:40:53 -- setup/common.sh@28 -- # mapfile -t mem 00:12:30.033 09:40:53 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:12:30.033 09:40:53 -- setup/common.sh@31 -- # IFS=': ' 00:12:30.033 09:40:53 -- setup/common.sh@31 -- # read -r var val _ 00:12:30.033 09:40:53 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 8586516 kB' 'MemUsed: 3655456 kB' 'SwapCached: 0 kB' 'Active: 850192 kB' 'Inactive: 1449560 kB' 'Active(anon): 132960 kB' 'Inactive(anon): 0 kB' 'Active(file): 717232 kB' 'Inactive(file): 1449560 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 980 kB' 'Writeback: 0 kB' 'FilePages: 2177256 kB' 'Mapped: 48724 kB' 'AnonPages: 124068 kB' 'Shmem: 10464 kB' 'KernelStack: 6352 kB' 'PageTables: 4404 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 64708 kB' 'Slab: 137672 kB' 'SReclaimable: 64708 kB' 'SUnreclaim: 72964 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:12:30.033 09:40:53 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:30.033 09:40:53 -- setup/common.sh@32 -- # continue 00:12:30.033 09:40:53 -- setup/common.sh@31 -- # IFS=': ' 00:12:30.033 09:40:53 -- setup/common.sh@31 -- # read -r var val _ 00:12:30.033 09:40:53 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:30.033 09:40:53 -- setup/common.sh@32 -- # continue 00:12:30.033 09:40:53 -- setup/common.sh@31 -- # IFS=': ' 00:12:30.033 09:40:53 -- setup/common.sh@31 -- # read -r var val _ 00:12:30.033 09:40:53 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:30.033 09:40:53 -- setup/common.sh@32 -- # continue 00:12:30.033 09:40:53 -- setup/common.sh@31 -- # IFS=': ' 00:12:30.033 09:40:53 -- setup/common.sh@31 -- # read -r var val _ 00:12:30.033 09:40:53 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:30.033 09:40:53 -- setup/common.sh@32 -- # continue 00:12:30.033 09:40:53 -- setup/common.sh@31 -- # IFS=': ' 00:12:30.033 09:40:53 -- setup/common.sh@31 -- # read -r var val _ 00:12:30.033 09:40:53 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:30.033 09:40:53 -- setup/common.sh@32 -- # continue 00:12:30.033 09:40:53 -- setup/common.sh@31 -- # IFS=': ' 00:12:30.033 09:40:53 -- setup/common.sh@31 -- # read -r var val _ 00:12:30.033 09:40:53 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:30.033 09:40:53 -- setup/common.sh@32 -- # continue 00:12:30.033 09:40:53 -- setup/common.sh@31 -- # IFS=': ' 00:12:30.033 09:40:53 -- setup/common.sh@31 -- # read -r var val _ 00:12:30.033 09:40:53 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:30.033 09:40:53 -- setup/common.sh@32 -- # continue 00:12:30.033 09:40:53 -- setup/common.sh@31 -- # IFS=': ' 00:12:30.033 09:40:53 -- setup/common.sh@31 -- # read -r var val _ 00:12:30.033 09:40:53 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:30.033 09:40:53 -- setup/common.sh@32 -- # continue 00:12:30.033 09:40:53 -- setup/common.sh@31 -- # IFS=': ' 00:12:30.033 09:40:53 -- setup/common.sh@31 -- # read -r var val _ 00:12:30.033 09:40:53 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:30.033 09:40:53 -- 
setup/common.sh@32 -- # continue 00:12:30.033 09:40:53 -- setup/common.sh@31 -- # IFS=': ' 00:12:30.033 09:40:53 -- setup/common.sh@31 -- # read -r var val _ 00:12:30.033 09:40:53 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:30.033 09:40:53 -- setup/common.sh@32 -- # continue 00:12:30.033 09:40:53 -- setup/common.sh@31 -- # IFS=': ' 00:12:30.033 09:40:53 -- setup/common.sh@31 -- # read -r var val _ 00:12:30.033 09:40:53 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:30.033 09:40:53 -- setup/common.sh@32 -- # continue 00:12:30.033 09:40:53 -- setup/common.sh@31 -- # IFS=': ' 00:12:30.033 09:40:53 -- setup/common.sh@31 -- # read -r var val _ 00:12:30.033 09:40:53 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:30.033 09:40:53 -- setup/common.sh@32 -- # continue 00:12:30.033 09:40:53 -- setup/common.sh@31 -- # IFS=': ' 00:12:30.033 09:40:53 -- setup/common.sh@31 -- # read -r var val _ 00:12:30.033 09:40:53 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:30.033 09:40:53 -- setup/common.sh@32 -- # continue 00:12:30.033 09:40:53 -- setup/common.sh@31 -- # IFS=': ' 00:12:30.033 09:40:53 -- setup/common.sh@31 -- # read -r var val _ 00:12:30.033 09:40:53 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:30.033 09:40:53 -- setup/common.sh@32 -- # continue 00:12:30.033 09:40:53 -- setup/common.sh@31 -- # IFS=': ' 00:12:30.033 09:40:53 -- setup/common.sh@31 -- # read -r var val _ 00:12:30.033 09:40:53 -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:30.033 09:40:53 -- setup/common.sh@32 -- # continue 00:12:30.033 09:40:53 -- setup/common.sh@31 -- # IFS=': ' 00:12:30.033 09:40:53 -- setup/common.sh@31 -- # read -r var val _ 00:12:30.033 09:40:53 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:30.033 09:40:53 -- setup/common.sh@32 -- # continue 00:12:30.033 09:40:53 -- setup/common.sh@31 -- # IFS=': ' 00:12:30.033 09:40:53 -- setup/common.sh@31 -- # read -r var val _ 00:12:30.033 09:40:53 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:30.033 09:40:53 -- setup/common.sh@32 -- # continue 00:12:30.033 09:40:53 -- setup/common.sh@31 -- # IFS=': ' 00:12:30.033 09:40:53 -- setup/common.sh@31 -- # read -r var val _ 00:12:30.033 09:40:53 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:30.033 09:40:53 -- setup/common.sh@32 -- # continue 00:12:30.033 09:40:53 -- setup/common.sh@31 -- # IFS=': ' 00:12:30.033 09:40:53 -- setup/common.sh@31 -- # read -r var val _ 00:12:30.033 09:40:53 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:30.033 09:40:53 -- setup/common.sh@32 -- # continue 00:12:30.033 09:40:53 -- setup/common.sh@31 -- # IFS=': ' 00:12:30.033 09:40:53 -- setup/common.sh@31 -- # read -r var val _ 00:12:30.033 09:40:53 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:30.033 09:40:53 -- setup/common.sh@32 -- # continue 00:12:30.033 09:40:53 -- setup/common.sh@31 -- # IFS=': ' 00:12:30.033 09:40:53 -- setup/common.sh@31 -- # read -r var val _ 00:12:30.033 09:40:53 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:30.033 09:40:53 -- setup/common.sh@32 -- # continue 00:12:30.033 09:40:53 -- setup/common.sh@31 -- # IFS=': ' 00:12:30.033 09:40:53 -- setup/common.sh@31 -- # read -r var val _ 00:12:30.033 09:40:53 -- 
setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:30.033 09:40:53 -- setup/common.sh@32 -- # continue 00:12:30.033 09:40:53 -- setup/common.sh@31 -- # IFS=': ' 00:12:30.033 09:40:53 -- setup/common.sh@31 -- # read -r var val _ 00:12:30.033 09:40:53 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:30.033 09:40:53 -- setup/common.sh@32 -- # continue 00:12:30.033 09:40:53 -- setup/common.sh@31 -- # IFS=': ' 00:12:30.033 09:40:53 -- setup/common.sh@31 -- # read -r var val _ 00:12:30.033 09:40:53 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:30.033 09:40:53 -- setup/common.sh@32 -- # continue 00:12:30.033 09:40:53 -- setup/common.sh@31 -- # IFS=': ' 00:12:30.033 09:40:53 -- setup/common.sh@31 -- # read -r var val _ 00:12:30.033 09:40:53 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:30.033 09:40:53 -- setup/common.sh@32 -- # continue 00:12:30.033 09:40:53 -- setup/common.sh@31 -- # IFS=': ' 00:12:30.033 09:40:53 -- setup/common.sh@31 -- # read -r var val _ 00:12:30.033 09:40:53 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:30.033 09:40:53 -- setup/common.sh@32 -- # continue 00:12:30.033 09:40:53 -- setup/common.sh@31 -- # IFS=': ' 00:12:30.033 09:40:53 -- setup/common.sh@31 -- # read -r var val _ 00:12:30.033 09:40:53 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:30.033 09:40:53 -- setup/common.sh@32 -- # continue 00:12:30.033 09:40:53 -- setup/common.sh@31 -- # IFS=': ' 00:12:30.033 09:40:53 -- setup/common.sh@31 -- # read -r var val _ 00:12:30.033 09:40:53 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:30.033 09:40:53 -- setup/common.sh@32 -- # continue 00:12:30.033 09:40:53 -- setup/common.sh@31 -- # IFS=': ' 00:12:30.033 09:40:53 -- setup/common.sh@31 -- # read -r var val _ 00:12:30.033 09:40:53 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:30.033 09:40:53 -- setup/common.sh@32 -- # continue 00:12:30.033 09:40:53 -- setup/common.sh@31 -- # IFS=': ' 00:12:30.034 09:40:53 -- setup/common.sh@31 -- # read -r var val _ 00:12:30.034 09:40:53 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:30.034 09:40:53 -- setup/common.sh@32 -- # continue 00:12:30.034 09:40:53 -- setup/common.sh@31 -- # IFS=': ' 00:12:30.034 09:40:53 -- setup/common.sh@31 -- # read -r var val _ 00:12:30.034 09:40:53 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:30.034 09:40:53 -- setup/common.sh@32 -- # continue 00:12:30.034 09:40:53 -- setup/common.sh@31 -- # IFS=': ' 00:12:30.034 09:40:53 -- setup/common.sh@31 -- # read -r var val _ 00:12:30.034 09:40:53 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:30.034 09:40:53 -- setup/common.sh@32 -- # continue 00:12:30.034 09:40:53 -- setup/common.sh@31 -- # IFS=': ' 00:12:30.034 09:40:53 -- setup/common.sh@31 -- # read -r var val _ 00:12:30.034 09:40:53 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:30.034 09:40:53 -- setup/common.sh@32 -- # continue 00:12:30.034 09:40:53 -- setup/common.sh@31 -- # IFS=': ' 00:12:30.034 09:40:53 -- setup/common.sh@31 -- # read -r var val _ 00:12:30.034 09:40:53 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:30.034 09:40:53 -- setup/common.sh@32 -- # continue 00:12:30.034 09:40:53 -- 
setup/common.sh@31 -- # IFS=': ' 00:12:30.034 09:40:53 -- setup/common.sh@31 -- # read -r var val _ 00:12:30.034 09:40:53 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:30.034 09:40:53 -- setup/common.sh@32 -- # continue 00:12:30.034 09:40:53 -- setup/common.sh@31 -- # IFS=': ' 00:12:30.034 09:40:53 -- setup/common.sh@31 -- # read -r var val _ 00:12:30.034 09:40:53 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:30.034 09:40:53 -- setup/common.sh@32 -- # continue 00:12:30.034 09:40:53 -- setup/common.sh@31 -- # IFS=': ' 00:12:30.034 09:40:53 -- setup/common.sh@31 -- # read -r var val _ 00:12:30.034 09:40:53 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:30.034 09:40:53 -- setup/common.sh@33 -- # echo 0 00:12:30.034 09:40:53 -- setup/common.sh@33 -- # return 0 00:12:30.034 09:40:53 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:12:30.034 09:40:53 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:12:30.034 09:40:53 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:12:30.034 node0=512 expecting 512 00:12:30.034 ************************************ 00:12:30.034 END TEST per_node_1G_alloc 00:12:30.034 ************************************ 00:12:30.034 09:40:53 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:12:30.034 09:40:53 -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512' 00:12:30.034 09:40:53 -- setup/hugepages.sh@130 -- # [[ 512 == \5\1\2 ]] 00:12:30.034 00:12:30.034 real 0m0.595s 00:12:30.034 user 0m0.273s 00:12:30.034 sys 0m0.317s 00:12:30.034 09:40:53 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:12:30.034 09:40:53 -- common/autotest_common.sh@10 -- # set +x 00:12:30.034 09:40:53 -- setup/hugepages.sh@212 -- # run_test even_2G_alloc even_2G_alloc 00:12:30.034 09:40:53 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:12:30.034 09:40:53 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:12:30.034 09:40:53 -- common/autotest_common.sh@10 -- # set +x 00:12:30.293 ************************************ 00:12:30.293 START TEST even_2G_alloc 00:12:30.293 ************************************ 00:12:30.293 09:40:53 -- common/autotest_common.sh@1111 -- # even_2G_alloc 00:12:30.293 09:40:53 -- setup/hugepages.sh@152 -- # get_test_nr_hugepages 2097152 00:12:30.293 09:40:53 -- setup/hugepages.sh@49 -- # local size=2097152 00:12:30.293 09:40:53 -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:12:30.293 09:40:53 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:12:30.293 09:40:53 -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:12:30.293 09:40:53 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:12:30.293 09:40:53 -- setup/hugepages.sh@62 -- # user_nodes=() 00:12:30.293 09:40:53 -- setup/hugepages.sh@62 -- # local user_nodes 00:12:30.293 09:40:53 -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:12:30.293 09:40:53 -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:12:30.293 09:40:53 -- setup/hugepages.sh@67 -- # nodes_test=() 00:12:30.293 09:40:53 -- setup/hugepages.sh@67 -- # local -g nodes_test 00:12:30.293 09:40:53 -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:12:30.293 09:40:53 -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:12:30.293 09:40:53 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:12:30.293 09:40:53 -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=1024 00:12:30.293 09:40:53 -- setup/hugepages.sh@83 -- # : 0 00:12:30.293 09:40:53 -- 
setup/hugepages.sh@84 -- # : 0 00:12:30.293 09:40:53 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:12:30.293 09:40:53 -- setup/hugepages.sh@153 -- # NRHUGE=1024 00:12:30.293 09:40:53 -- setup/hugepages.sh@153 -- # HUGE_EVEN_ALLOC=yes 00:12:30.293 09:40:53 -- setup/hugepages.sh@153 -- # setup output 00:12:30.293 09:40:53 -- setup/common.sh@9 -- # [[ output == output ]] 00:12:30.293 09:40:53 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:12:30.554 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:12:30.554 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:12:30.554 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:12:30.554 09:40:53 -- setup/hugepages.sh@154 -- # verify_nr_hugepages 00:12:30.554 09:40:53 -- setup/hugepages.sh@89 -- # local node 00:12:30.554 09:40:53 -- setup/hugepages.sh@90 -- # local sorted_t 00:12:30.554 09:40:53 -- setup/hugepages.sh@91 -- # local sorted_s 00:12:30.554 09:40:53 -- setup/hugepages.sh@92 -- # local surp 00:12:30.554 09:40:53 -- setup/hugepages.sh@93 -- # local resv 00:12:30.554 09:40:53 -- setup/hugepages.sh@94 -- # local anon 00:12:30.554 09:40:53 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:12:30.554 09:40:53 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:12:30.554 09:40:53 -- setup/common.sh@17 -- # local get=AnonHugePages 00:12:30.554 09:40:53 -- setup/common.sh@18 -- # local node= 00:12:30.554 09:40:53 -- setup/common.sh@19 -- # local var val 00:12:30.554 09:40:53 -- setup/common.sh@20 -- # local mem_f mem 00:12:30.554 09:40:53 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:12:30.554 09:40:53 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:12:30.554 09:40:53 -- setup/common.sh@25 -- # [[ -n '' ]] 00:12:30.554 09:40:53 -- setup/common.sh@28 -- # mapfile -t mem 00:12:30.554 09:40:53 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:12:30.554 09:40:53 -- setup/common.sh@31 -- # IFS=': ' 00:12:30.554 09:40:53 -- setup/common.sh@31 -- # read -r var val _ 00:12:30.554 09:40:53 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 7543008 kB' 'MemAvailable: 9505252 kB' 'Buffers: 2436 kB' 'Cached: 2174856 kB' 'SwapCached: 0 kB' 'Active: 850212 kB' 'Inactive: 1449596 kB' 'Active(anon): 132980 kB' 'Inactive(anon): 0 kB' 'Active(file): 717232 kB' 'Inactive(file): 1449596 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 1144 kB' 'Writeback: 0 kB' 'AnonPages: 124088 kB' 'Mapped: 49032 kB' 'Shmem: 10464 kB' 'KReclaimable: 64708 kB' 'Slab: 137796 kB' 'SReclaimable: 64708 kB' 'SUnreclaim: 73088 kB' 'KernelStack: 6472 kB' 'PageTables: 4648 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461012 kB' 'Committed_AS: 356716 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54724 kB' 'VmallocChunk: 0 kB' 'Percpu: 6240 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 173932 kB' 'DirectMap2M: 6117376 kB' 'DirectMap1G: 8388608 kB' 00:12:30.554 09:40:53 -- setup/common.sh@32 -- # [[ MemTotal == 
\A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:30.554 09:40:53 -- setup/common.sh@32 -- # continue 00:12:30.554 09:40:53 -- setup/common.sh@31 -- # IFS=': ' 00:12:30.554 09:40:53 -- setup/common.sh@31 -- # read -r var val _ 00:12:30.554 09:40:53 -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:30.554 09:40:53 -- setup/common.sh@32 -- # continue 00:12:30.554 09:40:53 -- setup/common.sh@31 -- # IFS=': ' 00:12:30.554 09:40:53 -- setup/common.sh@31 -- # read -r var val _ 00:12:30.554 09:40:53 -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:30.554 09:40:53 -- setup/common.sh@32 -- # continue 00:12:30.554 09:40:53 -- setup/common.sh@31 -- # IFS=': ' 00:12:30.554 09:40:53 -- setup/common.sh@31 -- # read -r var val _ 00:12:30.554 09:40:53 -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:30.554 09:40:53 -- setup/common.sh@32 -- # continue 00:12:30.554 09:40:53 -- setup/common.sh@31 -- # IFS=': ' 00:12:30.554 09:40:53 -- setup/common.sh@31 -- # read -r var val _ 00:12:30.554 09:40:53 -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:30.554 09:40:53 -- setup/common.sh@32 -- # continue 00:12:30.554 09:40:53 -- setup/common.sh@31 -- # IFS=': ' 00:12:30.554 09:40:53 -- setup/common.sh@31 -- # read -r var val _ 00:12:30.554 09:40:53 -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:30.554 09:40:53 -- setup/common.sh@32 -- # continue 00:12:30.554 09:40:53 -- setup/common.sh@31 -- # IFS=': ' 00:12:30.554 09:40:53 -- setup/common.sh@31 -- # read -r var val _ 00:12:30.554 09:40:53 -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:30.554 09:40:53 -- setup/common.sh@32 -- # continue 00:12:30.554 09:40:53 -- setup/common.sh@31 -- # IFS=': ' 00:12:30.554 09:40:53 -- setup/common.sh@31 -- # read -r var val _ 00:12:30.554 09:40:53 -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:30.554 09:40:53 -- setup/common.sh@32 -- # continue 00:12:30.554 09:40:53 -- setup/common.sh@31 -- # IFS=': ' 00:12:30.554 09:40:53 -- setup/common.sh@31 -- # read -r var val _ 00:12:30.554 09:40:53 -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:30.554 09:40:53 -- setup/common.sh@32 -- # continue 00:12:30.554 09:40:53 -- setup/common.sh@31 -- # IFS=': ' 00:12:30.554 09:40:53 -- setup/common.sh@31 -- # read -r var val _ 00:12:30.554 09:40:53 -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:30.554 09:40:53 -- setup/common.sh@32 -- # continue 00:12:30.554 09:40:53 -- setup/common.sh@31 -- # IFS=': ' 00:12:30.554 09:40:53 -- setup/common.sh@31 -- # read -r var val _ 00:12:30.554 09:40:53 -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:30.554 09:40:53 -- setup/common.sh@32 -- # continue 00:12:30.554 09:40:53 -- setup/common.sh@31 -- # IFS=': ' 00:12:30.554 09:40:53 -- setup/common.sh@31 -- # read -r var val _ 00:12:30.554 09:40:53 -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:30.554 09:40:53 -- setup/common.sh@32 -- # continue 00:12:30.554 09:40:53 -- setup/common.sh@31 -- # IFS=': ' 00:12:30.554 09:40:53 -- setup/common.sh@31 -- # read -r var val _ 00:12:30.554 09:40:53 -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:30.554 09:40:53 -- setup/common.sh@32 -- # continue 00:12:30.554 09:40:53 -- setup/common.sh@31 -- # IFS=': ' 00:12:30.554 09:40:53 -- setup/common.sh@31 -- # read -r var 
val _ 00:12:30.554 09:40:53 -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:30.554 09:40:53 -- setup/common.sh@32 -- # continue 00:12:30.554 09:40:53 -- setup/common.sh@31 -- # IFS=': ' 00:12:30.554 09:40:53 -- setup/common.sh@31 -- # read -r var val _ 00:12:30.554 09:40:53 -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:30.554 09:40:53 -- setup/common.sh@32 -- # continue 00:12:30.554 09:40:53 -- setup/common.sh@31 -- # IFS=': ' 00:12:30.554 09:40:53 -- setup/common.sh@31 -- # read -r var val _ 00:12:30.554 09:40:53 -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:30.554 09:40:53 -- setup/common.sh@32 -- # continue 00:12:30.554 09:40:53 -- setup/common.sh@31 -- # IFS=': ' 00:12:30.554 09:40:53 -- setup/common.sh@31 -- # read -r var val _ 00:12:30.554 09:40:53 -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:30.554 09:40:53 -- setup/common.sh@32 -- # continue 00:12:30.554 09:40:53 -- setup/common.sh@31 -- # IFS=': ' 00:12:30.554 09:40:53 -- setup/common.sh@31 -- # read -r var val _ 00:12:30.554 09:40:53 -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:30.554 09:40:53 -- setup/common.sh@32 -- # continue 00:12:30.554 09:40:53 -- setup/common.sh@31 -- # IFS=': ' 00:12:30.554 09:40:53 -- setup/common.sh@31 -- # read -r var val _ 00:12:30.554 09:40:53 -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:30.554 09:40:53 -- setup/common.sh@32 -- # continue 00:12:30.554 09:40:53 -- setup/common.sh@31 -- # IFS=': ' 00:12:30.554 09:40:53 -- setup/common.sh@31 -- # read -r var val _ 00:12:30.554 09:40:53 -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:30.554 09:40:53 -- setup/common.sh@32 -- # continue 00:12:30.554 09:40:53 -- setup/common.sh@31 -- # IFS=': ' 00:12:30.554 09:40:53 -- setup/common.sh@31 -- # read -r var val _ 00:12:30.554 09:40:53 -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:30.554 09:40:53 -- setup/common.sh@32 -- # continue 00:12:30.554 09:40:53 -- setup/common.sh@31 -- # IFS=': ' 00:12:30.554 09:40:53 -- setup/common.sh@31 -- # read -r var val _ 00:12:30.554 09:40:53 -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:30.554 09:40:53 -- setup/common.sh@32 -- # continue 00:12:30.554 09:40:53 -- setup/common.sh@31 -- # IFS=': ' 00:12:30.554 09:40:53 -- setup/common.sh@31 -- # read -r var val _ 00:12:30.554 09:40:53 -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:30.554 09:40:53 -- setup/common.sh@32 -- # continue 00:12:30.554 09:40:53 -- setup/common.sh@31 -- # IFS=': ' 00:12:30.554 09:40:53 -- setup/common.sh@31 -- # read -r var val _ 00:12:30.554 09:40:53 -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:30.554 09:40:53 -- setup/common.sh@32 -- # continue 00:12:30.554 09:40:53 -- setup/common.sh@31 -- # IFS=': ' 00:12:30.554 09:40:53 -- setup/common.sh@31 -- # read -r var val _ 00:12:30.554 09:40:53 -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:30.554 09:40:53 -- setup/common.sh@32 -- # continue 00:12:30.554 09:40:53 -- setup/common.sh@31 -- # IFS=': ' 00:12:30.554 09:40:53 -- setup/common.sh@31 -- # read -r var val _ 00:12:30.554 09:40:53 -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:30.554 09:40:53 -- setup/common.sh@32 -- # continue 00:12:30.554 09:40:53 -- setup/common.sh@31 -- # IFS=': ' 00:12:30.554 
09:40:53 -- setup/common.sh@31 -- # read -r var val _ 00:12:30.554 09:40:53 -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:30.554 09:40:53 -- setup/common.sh@32 -- # continue 00:12:30.554 09:40:53 -- setup/common.sh@31 -- # IFS=': ' 00:12:30.554 09:40:53 -- setup/common.sh@31 -- # read -r var val _ 00:12:30.554 09:40:53 -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:30.554 09:40:53 -- setup/common.sh@32 -- # continue 00:12:30.554 09:40:53 -- setup/common.sh@31 -- # IFS=': ' 00:12:30.554 09:40:53 -- setup/common.sh@31 -- # read -r var val _ 00:12:30.554 09:40:53 -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:30.554 09:40:53 -- setup/common.sh@32 -- # continue 00:12:30.554 09:40:53 -- setup/common.sh@31 -- # IFS=': ' 00:12:30.554 09:40:53 -- setup/common.sh@31 -- # read -r var val _ 00:12:30.554 09:40:53 -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:30.554 09:40:53 -- setup/common.sh@32 -- # continue 00:12:30.555 09:40:53 -- setup/common.sh@31 -- # IFS=': ' 00:12:30.555 09:40:53 -- setup/common.sh@31 -- # read -r var val _ 00:12:30.555 09:40:53 -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:30.555 09:40:53 -- setup/common.sh@32 -- # continue 00:12:30.555 09:40:53 -- setup/common.sh@31 -- # IFS=': ' 00:12:30.555 09:40:53 -- setup/common.sh@31 -- # read -r var val _ 00:12:30.555 09:40:53 -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:30.555 09:40:53 -- setup/common.sh@32 -- # continue 00:12:30.555 09:40:53 -- setup/common.sh@31 -- # IFS=': ' 00:12:30.555 09:40:53 -- setup/common.sh@31 -- # read -r var val _ 00:12:30.555 09:40:53 -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:30.555 09:40:53 -- setup/common.sh@32 -- # continue 00:12:30.555 09:40:53 -- setup/common.sh@31 -- # IFS=': ' 00:12:30.555 09:40:53 -- setup/common.sh@31 -- # read -r var val _ 00:12:30.555 09:40:53 -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:30.555 09:40:53 -- setup/common.sh@32 -- # continue 00:12:30.555 09:40:53 -- setup/common.sh@31 -- # IFS=': ' 00:12:30.555 09:40:53 -- setup/common.sh@31 -- # read -r var val _ 00:12:30.555 09:40:53 -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:30.555 09:40:53 -- setup/common.sh@32 -- # continue 00:12:30.555 09:40:53 -- setup/common.sh@31 -- # IFS=': ' 00:12:30.555 09:40:53 -- setup/common.sh@31 -- # read -r var val _ 00:12:30.555 09:40:53 -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:30.555 09:40:53 -- setup/common.sh@32 -- # continue 00:12:30.555 09:40:53 -- setup/common.sh@31 -- # IFS=': ' 00:12:30.555 09:40:53 -- setup/common.sh@31 -- # read -r var val _ 00:12:30.555 09:40:53 -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:30.555 09:40:53 -- setup/common.sh@32 -- # continue 00:12:30.555 09:40:53 -- setup/common.sh@31 -- # IFS=': ' 00:12:30.555 09:40:53 -- setup/common.sh@31 -- # read -r var val _ 00:12:30.555 09:40:53 -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:30.555 09:40:53 -- setup/common.sh@32 -- # continue 00:12:30.555 09:40:53 -- setup/common.sh@31 -- # IFS=': ' 00:12:30.555 09:40:53 -- setup/common.sh@31 -- # read -r var val _ 00:12:30.555 09:40:53 -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:30.555 09:40:53 -- setup/common.sh@32 -- # 
continue 00:12:30.555 09:40:53 -- setup/common.sh@31 -- # IFS=': ' 00:12:30.555 09:40:53 -- setup/common.sh@31 -- # read -r var val _ 00:12:30.555 09:40:53 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:30.555 09:40:53 -- setup/common.sh@32 -- # continue 00:12:30.555 09:40:53 -- setup/common.sh@31 -- # IFS=': ' 00:12:30.555 09:40:53 -- setup/common.sh@31 -- # read -r var val _ 00:12:30.555 09:40:53 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:30.555 09:40:53 -- setup/common.sh@33 -- # echo 0 00:12:30.555 09:40:53 -- setup/common.sh@33 -- # return 0 00:12:30.555 09:40:53 -- setup/hugepages.sh@97 -- # anon=0 00:12:30.555 09:40:53 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:12:30.555 09:40:53 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:12:30.555 09:40:53 -- setup/common.sh@18 -- # local node= 00:12:30.555 09:40:53 -- setup/common.sh@19 -- # local var val 00:12:30.555 09:40:53 -- setup/common.sh@20 -- # local mem_f mem 00:12:30.555 09:40:53 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:12:30.555 09:40:53 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:12:30.555 09:40:53 -- setup/common.sh@25 -- # [[ -n '' ]] 00:12:30.555 09:40:53 -- setup/common.sh@28 -- # mapfile -t mem 00:12:30.555 09:40:53 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:12:30.555 09:40:53 -- setup/common.sh@31 -- # IFS=': ' 00:12:30.555 09:40:53 -- setup/common.sh@31 -- # read -r var val _ 00:12:30.555 09:40:53 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 7543008 kB' 'MemAvailable: 9505256 kB' 'Buffers: 2436 kB' 'Cached: 2174860 kB' 'SwapCached: 0 kB' 'Active: 849952 kB' 'Inactive: 1449600 kB' 'Active(anon): 132720 kB' 'Inactive(anon): 0 kB' 'Active(file): 717232 kB' 'Inactive(file): 1449600 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 1144 kB' 'Writeback: 0 kB' 'AnonPages: 124124 kB' 'Mapped: 48736 kB' 'Shmem: 10464 kB' 'KReclaimable: 64708 kB' 'Slab: 137788 kB' 'SReclaimable: 64708 kB' 'SUnreclaim: 73080 kB' 'KernelStack: 6368 kB' 'PageTables: 4456 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461012 kB' 'Committed_AS: 357080 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54676 kB' 'VmallocChunk: 0 kB' 'Percpu: 6240 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 173932 kB' 'DirectMap2M: 6117376 kB' 'DirectMap1G: 8388608 kB' 00:12:30.555 09:40:53 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:30.555 09:40:53 -- setup/common.sh@32 -- # continue 00:12:30.555 09:40:53 -- setup/common.sh@31 -- # IFS=': ' 00:12:30.555 09:40:53 -- setup/common.sh@31 -- # read -r var val _ 00:12:30.555 09:40:53 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:30.555 09:40:53 -- setup/common.sh@32 -- # continue 00:12:30.555 09:40:53 -- setup/common.sh@31 -- # IFS=': ' 00:12:30.555 09:40:53 -- setup/common.sh@31 -- # read -r var val _ 00:12:30.555 09:40:53 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:30.555 09:40:53 -- setup/common.sh@32 -- # 
continue 00:12:30.555 09:40:53 -- setup/common.sh@31 -- # IFS=': ' 00:12:30.555 09:40:53 -- setup/common.sh@31 -- # read -r var val _ 00:12:30.555 09:40:53 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:30.555 09:40:53 -- setup/common.sh@32 -- # continue 00:12:30.555 09:40:53 -- setup/common.sh@31 -- # IFS=': ' 00:12:30.555 09:40:53 -- setup/common.sh@31 -- # read -r var val _ 00:12:30.555 09:40:53 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:30.555 09:40:53 -- setup/common.sh@32 -- # continue 00:12:30.555 09:40:53 -- setup/common.sh@31 -- # IFS=': ' 00:12:30.555 09:40:53 -- setup/common.sh@31 -- # read -r var val _ 00:12:30.555 09:40:53 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:30.555 09:40:53 -- setup/common.sh@32 -- # continue 00:12:30.555 09:40:53 -- setup/common.sh@31 -- # IFS=': ' 00:12:30.555 09:40:53 -- setup/common.sh@31 -- # read -r var val _ 00:12:30.555 09:40:53 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:30.555 09:40:53 -- setup/common.sh@32 -- # continue 00:12:30.555 09:40:53 -- setup/common.sh@31 -- # IFS=': ' 00:12:30.555 09:40:53 -- setup/common.sh@31 -- # read -r var val _ 00:12:30.555 09:40:53 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:30.555 09:40:53 -- setup/common.sh@32 -- # continue 00:12:30.555 09:40:53 -- setup/common.sh@31 -- # IFS=': ' 00:12:30.555 09:40:53 -- setup/common.sh@31 -- # read -r var val _ 00:12:30.555 09:40:53 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:30.555 09:40:53 -- setup/common.sh@32 -- # continue 00:12:30.555 09:40:53 -- setup/common.sh@31 -- # IFS=': ' 00:12:30.555 09:40:53 -- setup/common.sh@31 -- # read -r var val _ 00:12:30.555 09:40:53 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:30.555 09:40:53 -- setup/common.sh@32 -- # continue 00:12:30.555 09:40:53 -- setup/common.sh@31 -- # IFS=': ' 00:12:30.555 09:40:53 -- setup/common.sh@31 -- # read -r var val _ 00:12:30.555 09:40:53 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:30.555 09:40:53 -- setup/common.sh@32 -- # continue 00:12:30.555 09:40:53 -- setup/common.sh@31 -- # IFS=': ' 00:12:30.555 09:40:53 -- setup/common.sh@31 -- # read -r var val _ 00:12:30.555 09:40:53 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:30.555 09:40:53 -- setup/common.sh@32 -- # continue 00:12:30.555 09:40:53 -- setup/common.sh@31 -- # IFS=': ' 00:12:30.555 09:40:53 -- setup/common.sh@31 -- # read -r var val _ 00:12:30.555 09:40:53 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:30.555 09:40:53 -- setup/common.sh@32 -- # continue 00:12:30.555 09:40:53 -- setup/common.sh@31 -- # IFS=': ' 00:12:30.555 09:40:53 -- setup/common.sh@31 -- # read -r var val _ 00:12:30.555 09:40:53 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:30.555 09:40:53 -- setup/common.sh@32 -- # continue 00:12:30.555 09:40:53 -- setup/common.sh@31 -- # IFS=': ' 00:12:30.555 09:40:53 -- setup/common.sh@31 -- # read -r var val _ 00:12:30.555 09:40:53 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:30.555 09:40:53 -- setup/common.sh@32 -- # continue 00:12:30.555 09:40:53 -- setup/common.sh@31 -- # IFS=': ' 00:12:30.823 09:40:53 -- setup/common.sh@31 -- # read -r var val _ 00:12:30.823 09:40:53 -- setup/common.sh@32 -- # [[ 
SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:30.823 09:40:53 -- setup/common.sh@32 -- # continue 00:12:30.823 09:40:53 -- setup/common.sh@31 -- # IFS=': ' 00:12:30.823 09:40:53 -- setup/common.sh@31 -- # read -r var val _ 00:12:30.823 09:40:53 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:30.823 09:40:53 -- setup/common.sh@32 -- # continue 00:12:30.823 09:40:53 -- setup/common.sh@31 -- # IFS=': ' 00:12:30.823 09:40:53 -- setup/common.sh@31 -- # read -r var val _ 00:12:30.823 09:40:53 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:30.823 09:40:53 -- setup/common.sh@32 -- # continue 00:12:30.823 09:40:53 -- setup/common.sh@31 -- # IFS=': ' 00:12:30.823 09:40:53 -- setup/common.sh@31 -- # read -r var val _ 00:12:30.823 09:40:53 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:30.823 09:40:53 -- setup/common.sh@32 -- # continue 00:12:30.823 09:40:53 -- setup/common.sh@31 -- # IFS=': ' 00:12:30.823 09:40:53 -- setup/common.sh@31 -- # read -r var val _ 00:12:30.823 09:40:53 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:30.823 09:40:53 -- setup/common.sh@32 -- # continue 00:12:30.823 09:40:53 -- setup/common.sh@31 -- # IFS=': ' 00:12:30.823 09:40:53 -- setup/common.sh@31 -- # read -r var val _ 00:12:30.823 09:40:53 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:30.823 09:40:53 -- setup/common.sh@32 -- # continue 00:12:30.823 09:40:53 -- setup/common.sh@31 -- # IFS=': ' 00:12:30.823 09:40:53 -- setup/common.sh@31 -- # read -r var val _ 00:12:30.823 09:40:53 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:30.823 09:40:53 -- setup/common.sh@32 -- # continue 00:12:30.823 09:40:53 -- setup/common.sh@31 -- # IFS=': ' 00:12:30.823 09:40:53 -- setup/common.sh@31 -- # read -r var val _ 00:12:30.823 09:40:53 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:30.823 09:40:53 -- setup/common.sh@32 -- # continue 00:12:30.823 09:40:53 -- setup/common.sh@31 -- # IFS=': ' 00:12:30.823 09:40:53 -- setup/common.sh@31 -- # read -r var val _ 00:12:30.823 09:40:53 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:30.823 09:40:53 -- setup/common.sh@32 -- # continue 00:12:30.823 09:40:53 -- setup/common.sh@31 -- # IFS=': ' 00:12:30.823 09:40:53 -- setup/common.sh@31 -- # read -r var val _ 00:12:30.823 09:40:53 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:30.823 09:40:53 -- setup/common.sh@32 -- # continue 00:12:30.823 09:40:53 -- setup/common.sh@31 -- # IFS=': ' 00:12:30.823 09:40:53 -- setup/common.sh@31 -- # read -r var val _ 00:12:30.823 09:40:53 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:30.823 09:40:53 -- setup/common.sh@32 -- # continue 00:12:30.823 09:40:53 -- setup/common.sh@31 -- # IFS=': ' 00:12:30.823 09:40:53 -- setup/common.sh@31 -- # read -r var val _ 00:12:30.823 09:40:53 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:30.823 09:40:53 -- setup/common.sh@32 -- # continue 00:12:30.823 09:40:53 -- setup/common.sh@31 -- # IFS=': ' 00:12:30.823 09:40:53 -- setup/common.sh@31 -- # read -r var val _ 00:12:30.823 09:40:53 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:30.823 09:40:53 -- setup/common.sh@32 -- # continue 00:12:30.823 09:40:53 -- setup/common.sh@31 -- # IFS=': ' 00:12:30.823 09:40:53 -- setup/common.sh@31 -- 
# read -r var val _ 00:12:30.823 09:40:53 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:30.823 09:40:53 -- setup/common.sh@32 -- # continue 00:12:30.823 09:40:53 -- setup/common.sh@31 -- # IFS=': ' 00:12:30.823 09:40:53 -- setup/common.sh@31 -- # read -r var val _ 00:12:30.823 09:40:53 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:30.823 09:40:53 -- setup/common.sh@32 -- # continue 00:12:30.823 09:40:53 -- setup/common.sh@31 -- # IFS=': ' 00:12:30.823 09:40:53 -- setup/common.sh@31 -- # read -r var val _ 00:12:30.823 09:40:53 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:30.823 09:40:53 -- setup/common.sh@32 -- # continue 00:12:30.823 09:40:53 -- setup/common.sh@31 -- # IFS=': ' 00:12:30.823 09:40:53 -- setup/common.sh@31 -- # read -r var val _ 00:12:30.823 09:40:53 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:30.823 09:40:53 -- setup/common.sh@32 -- # continue 00:12:30.823 09:40:53 -- setup/common.sh@31 -- # IFS=': ' 00:12:30.823 09:40:53 -- setup/common.sh@31 -- # read -r var val _ 00:12:30.823 09:40:53 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:30.823 09:40:53 -- setup/common.sh@32 -- # continue 00:12:30.823 09:40:53 -- setup/common.sh@31 -- # IFS=': ' 00:12:30.823 09:40:53 -- setup/common.sh@31 -- # read -r var val _ 00:12:30.823 09:40:53 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:30.823 09:40:53 -- setup/common.sh@32 -- # continue 00:12:30.823 09:40:53 -- setup/common.sh@31 -- # IFS=': ' 00:12:30.823 09:40:53 -- setup/common.sh@31 -- # read -r var val _ 00:12:30.823 09:40:53 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:30.823 09:40:53 -- setup/common.sh@32 -- # continue 00:12:30.823 09:40:53 -- setup/common.sh@31 -- # IFS=': ' 00:12:30.823 09:40:53 -- setup/common.sh@31 -- # read -r var val _ 00:12:30.823 09:40:53 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:30.823 09:40:53 -- setup/common.sh@32 -- # continue 00:12:30.823 09:40:53 -- setup/common.sh@31 -- # IFS=': ' 00:12:30.823 09:40:53 -- setup/common.sh@31 -- # read -r var val _ 00:12:30.823 09:40:53 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:30.823 09:40:53 -- setup/common.sh@32 -- # continue 00:12:30.823 09:40:53 -- setup/common.sh@31 -- # IFS=': ' 00:12:30.823 09:40:53 -- setup/common.sh@31 -- # read -r var val _ 00:12:30.823 09:40:53 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:30.823 09:40:53 -- setup/common.sh@32 -- # continue 00:12:30.823 09:40:53 -- setup/common.sh@31 -- # IFS=': ' 00:12:30.823 09:40:53 -- setup/common.sh@31 -- # read -r var val _ 00:12:30.823 09:40:53 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:30.823 09:40:53 -- setup/common.sh@32 -- # continue 00:12:30.823 09:40:53 -- setup/common.sh@31 -- # IFS=': ' 00:12:30.823 09:40:53 -- setup/common.sh@31 -- # read -r var val _ 00:12:30.823 09:40:53 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:30.823 09:40:53 -- setup/common.sh@32 -- # continue 00:12:30.823 09:40:53 -- setup/common.sh@31 -- # IFS=': ' 00:12:30.823 09:40:53 -- setup/common.sh@31 -- # read -r var val _ 00:12:30.823 09:40:53 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:30.823 09:40:53 -- setup/common.sh@32 -- 
# continue 00:12:30.823 09:40:53 -- setup/common.sh@31 -- # IFS=': ' 00:12:30.823 09:40:53 -- setup/common.sh@31 -- # read -r var val _ 00:12:30.823 09:40:53 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:30.823 09:40:53 -- setup/common.sh@32 -- # continue 00:12:30.823 09:40:53 -- setup/common.sh@31 -- # IFS=': ' 00:12:30.823 09:40:53 -- setup/common.sh@31 -- # read -r var val _ 00:12:30.823 09:40:53 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:30.823 09:40:53 -- setup/common.sh@32 -- # continue 00:12:30.823 09:40:53 -- setup/common.sh@31 -- # IFS=': ' 00:12:30.823 09:40:53 -- setup/common.sh@31 -- # read -r var val _ 00:12:30.823 09:40:53 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:30.823 09:40:53 -- setup/common.sh@32 -- # continue 00:12:30.823 09:40:53 -- setup/common.sh@31 -- # IFS=': ' 00:12:30.823 09:40:53 -- setup/common.sh@31 -- # read -r var val _ 00:12:30.823 09:40:53 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:30.823 09:40:53 -- setup/common.sh@32 -- # continue 00:12:30.823 09:40:53 -- setup/common.sh@31 -- # IFS=': ' 00:12:30.823 09:40:53 -- setup/common.sh@31 -- # read -r var val _ 00:12:30.823 09:40:53 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:30.823 09:40:53 -- setup/common.sh@32 -- # continue 00:12:30.823 09:40:53 -- setup/common.sh@31 -- # IFS=': ' 00:12:30.823 09:40:53 -- setup/common.sh@31 -- # read -r var val _ 00:12:30.823 09:40:53 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:30.823 09:40:53 -- setup/common.sh@32 -- # continue 00:12:30.823 09:40:53 -- setup/common.sh@31 -- # IFS=': ' 00:12:30.823 09:40:53 -- setup/common.sh@31 -- # read -r var val _ 00:12:30.823 09:40:53 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:30.823 09:40:53 -- setup/common.sh@32 -- # continue 00:12:30.823 09:40:53 -- setup/common.sh@31 -- # IFS=': ' 00:12:30.823 09:40:53 -- setup/common.sh@31 -- # read -r var val _ 00:12:30.823 09:40:53 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:30.823 09:40:53 -- setup/common.sh@32 -- # continue 00:12:30.823 09:40:53 -- setup/common.sh@31 -- # IFS=': ' 00:12:30.823 09:40:53 -- setup/common.sh@31 -- # read -r var val _ 00:12:30.823 09:40:53 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:30.823 09:40:53 -- setup/common.sh@32 -- # continue 00:12:30.823 09:40:53 -- setup/common.sh@31 -- # IFS=': ' 00:12:30.823 09:40:53 -- setup/common.sh@31 -- # read -r var val _ 00:12:30.823 09:40:53 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:30.823 09:40:53 -- setup/common.sh@32 -- # continue 00:12:30.823 09:40:53 -- setup/common.sh@31 -- # IFS=': ' 00:12:30.823 09:40:53 -- setup/common.sh@31 -- # read -r var val _ 00:12:30.823 09:40:53 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:30.823 09:40:53 -- setup/common.sh@33 -- # echo 0 00:12:30.823 09:40:53 -- setup/common.sh@33 -- # return 0 00:12:30.823 09:40:53 -- setup/hugepages.sh@99 -- # surp=0 00:12:30.823 09:40:53 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:12:30.823 09:40:53 -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:12:30.823 09:40:53 -- setup/common.sh@18 -- # local node= 00:12:30.824 09:40:53 -- setup/common.sh@19 -- # local var val 00:12:30.824 09:40:53 -- 
setup/common.sh@20 -- # local mem_f mem 00:12:30.824 09:40:53 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:12:30.824 09:40:53 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:12:30.824 09:40:53 -- setup/common.sh@25 -- # [[ -n '' ]] 00:12:30.824 09:40:53 -- setup/common.sh@28 -- # mapfile -t mem 00:12:30.824 09:40:53 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:12:30.824 09:40:53 -- setup/common.sh@31 -- # IFS=': ' 00:12:30.824 09:40:53 -- setup/common.sh@31 -- # read -r var val _ 00:12:30.824 09:40:53 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 7543008 kB' 'MemAvailable: 9505256 kB' 'Buffers: 2436 kB' 'Cached: 2174860 kB' 'SwapCached: 0 kB' 'Active: 849908 kB' 'Inactive: 1449600 kB' 'Active(anon): 132676 kB' 'Inactive(anon): 0 kB' 'Active(file): 717232 kB' 'Inactive(file): 1449600 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 1144 kB' 'Writeback: 0 kB' 'AnonPages: 123856 kB' 'Mapped: 48736 kB' 'Shmem: 10464 kB' 'KReclaimable: 64708 kB' 'Slab: 137788 kB' 'SReclaimable: 64708 kB' 'SUnreclaim: 73080 kB' 'KernelStack: 6368 kB' 'PageTables: 4456 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461012 kB' 'Committed_AS: 357080 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54692 kB' 'VmallocChunk: 0 kB' 'Percpu: 6240 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 173932 kB' 'DirectMap2M: 6117376 kB' 'DirectMap1G: 8388608 kB' 00:12:30.824 09:40:53 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:30.824 09:40:53 -- setup/common.sh@32 -- # continue 00:12:30.824 09:40:53 -- setup/common.sh@31 -- # IFS=': ' 00:12:30.824 09:40:53 -- setup/common.sh@31 -- # read -r var val _ 00:12:30.824 09:40:53 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:30.824 09:40:53 -- setup/common.sh@32 -- # continue 00:12:30.824 09:40:53 -- setup/common.sh@31 -- # IFS=': ' 00:12:30.824 09:40:53 -- setup/common.sh@31 -- # read -r var val _ 00:12:30.824 09:40:53 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:30.824 09:40:53 -- setup/common.sh@32 -- # continue 00:12:30.824 09:40:53 -- setup/common.sh@31 -- # IFS=': ' 00:12:30.824 09:40:53 -- setup/common.sh@31 -- # read -r var val _ 00:12:30.824 09:40:53 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:30.824 09:40:53 -- setup/common.sh@32 -- # continue 00:12:30.824 09:40:53 -- setup/common.sh@31 -- # IFS=': ' 00:12:30.824 09:40:53 -- setup/common.sh@31 -- # read -r var val _ 00:12:30.824 09:40:53 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:30.824 09:40:53 -- setup/common.sh@32 -- # continue 00:12:30.824 09:40:53 -- setup/common.sh@31 -- # IFS=': ' 00:12:30.824 09:40:53 -- setup/common.sh@31 -- # read -r var val _ 00:12:30.824 09:40:53 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:30.824 09:40:53 -- setup/common.sh@32 -- # continue 00:12:30.824 09:40:53 -- setup/common.sh@31 -- # IFS=': ' 00:12:30.824 09:40:53 -- setup/common.sh@31 -- # read -r var val _ 00:12:30.824 09:40:53 -- 
setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:30.824 09:40:53 -- setup/common.sh@32 -- # continue 00:12:30.824 09:40:53 -- setup/common.sh@31 -- # IFS=': ' 00:12:30.824 09:40:53 -- setup/common.sh@31 -- # read -r var val _ 00:12:30.824 09:40:53 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:30.824 09:40:53 -- setup/common.sh@32 -- # continue 00:12:30.824 09:40:53 -- setup/common.sh@31 -- # IFS=': ' 00:12:30.824 09:40:53 -- setup/common.sh@31 -- # read -r var val _ 00:12:30.824 09:40:53 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:30.824 09:40:53 -- setup/common.sh@32 -- # continue 00:12:30.824 09:40:53 -- setup/common.sh@31 -- # IFS=': ' 00:12:30.824 09:40:53 -- setup/common.sh@31 -- # read -r var val _ 00:12:30.824 09:40:53 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:30.824 09:40:53 -- setup/common.sh@32 -- # continue 00:12:30.824 09:40:53 -- setup/common.sh@31 -- # IFS=': ' 00:12:30.824 09:40:53 -- setup/common.sh@31 -- # read -r var val _ 00:12:30.824 09:40:53 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:30.824 09:40:53 -- setup/common.sh@32 -- # continue 00:12:30.824 09:40:53 -- setup/common.sh@31 -- # IFS=': ' 00:12:30.824 09:40:53 -- setup/common.sh@31 -- # read -r var val _ 00:12:30.824 09:40:53 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:30.824 09:40:53 -- setup/common.sh@32 -- # continue 00:12:30.824 09:40:53 -- setup/common.sh@31 -- # IFS=': ' 00:12:30.824 09:40:53 -- setup/common.sh@31 -- # read -r var val _ 00:12:30.824 09:40:53 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:30.824 09:40:53 -- setup/common.sh@32 -- # continue 00:12:30.824 09:40:53 -- setup/common.sh@31 -- # IFS=': ' 00:12:30.824 09:40:53 -- setup/common.sh@31 -- # read -r var val _ 00:12:30.824 09:40:53 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:30.824 09:40:53 -- setup/common.sh@32 -- # continue 00:12:30.824 09:40:53 -- setup/common.sh@31 -- # IFS=': ' 00:12:30.824 09:40:53 -- setup/common.sh@31 -- # read -r var val _ 00:12:30.824 09:40:53 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:30.824 09:40:53 -- setup/common.sh@32 -- # continue 00:12:30.824 09:40:53 -- setup/common.sh@31 -- # IFS=': ' 00:12:30.824 09:40:53 -- setup/common.sh@31 -- # read -r var val _ 00:12:30.824 09:40:53 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:30.824 09:40:53 -- setup/common.sh@32 -- # continue 00:12:30.824 09:40:53 -- setup/common.sh@31 -- # IFS=': ' 00:12:30.824 09:40:53 -- setup/common.sh@31 -- # read -r var val _ 00:12:30.824 09:40:53 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:30.824 09:40:53 -- setup/common.sh@32 -- # continue 00:12:30.824 09:40:53 -- setup/common.sh@31 -- # IFS=': ' 00:12:30.824 09:40:53 -- setup/common.sh@31 -- # read -r var val _ 00:12:30.824 09:40:53 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:30.824 09:40:53 -- setup/common.sh@32 -- # continue 00:12:30.824 09:40:53 -- setup/common.sh@31 -- # IFS=': ' 00:12:30.824 09:40:53 -- setup/common.sh@31 -- # read -r var val _ 00:12:30.824 09:40:53 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:30.824 09:40:53 -- setup/common.sh@32 -- # continue 00:12:30.824 09:40:53 -- setup/common.sh@31 -- # IFS=': ' 
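[annotation] The long runs of "[[ <key> == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]" / "continue" pairs in this trace are setup/common.sh's get_meminfo helper walking every field of the meminfo snapshot until it reaches the key it was asked for (AnonHugePages, then HugePages_Surp, then HugePages_Rsvd), echoing that value and returning. Below is a minimal sketch of that lookup pattern under stated assumptions: it is simplified and not the literal SPDK helper (the real one first snapshots the file into an array with mapfile/printf, as the trace shows), and the function name is illustrative only.

    #!/usr/bin/env bash
    # Sketch of the lookup pattern this trace exercises (illustrative name).
    get_meminfo_sketch() {
        local get=$1 node=${2:-}
        local mem_f=/proc/meminfo
        # With a node argument the per-node file is used instead (see the
        # node0 lookup further down in the trace).
        [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]] &&
            mem_f=/sys/devices/system/node/node$node/meminfo
        local var val _
        while IFS=': ' read -r var val _; do
            [[ $var == "$get" ]] || continue   # one "continue" per non-matching key
            echo "$val"                        # e.g. "0" for HugePages_Surp
            return 0
        done < "$mem_f"
        return 1
    }
    # Example: get_meminfo_sketch HugePages_Surp   ->  0 on this run
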
00:12:30.824 09:40:53 -- setup/common.sh@31 -- # read -r var val _ 00:12:30.824 09:40:53 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:30.824 09:40:53 -- setup/common.sh@32 -- # continue 00:12:30.824 09:40:53 -- setup/common.sh@31 -- # IFS=': ' 00:12:30.824 09:40:53 -- setup/common.sh@31 -- # read -r var val _ 00:12:30.824 09:40:53 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:30.824 09:40:53 -- setup/common.sh@32 -- # continue 00:12:30.824 09:40:53 -- setup/common.sh@31 -- # IFS=': ' 00:12:30.824 09:40:53 -- setup/common.sh@31 -- # read -r var val _ 00:12:30.824 09:40:53 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:30.824 09:40:53 -- setup/common.sh@32 -- # continue 00:12:30.824 09:40:53 -- setup/common.sh@31 -- # IFS=': ' 00:12:30.824 09:40:53 -- setup/common.sh@31 -- # read -r var val _ 00:12:30.824 09:40:53 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:30.824 09:40:53 -- setup/common.sh@32 -- # continue 00:12:30.824 09:40:53 -- setup/common.sh@31 -- # IFS=': ' 00:12:30.824 09:40:53 -- setup/common.sh@31 -- # read -r var val _ 00:12:30.824 09:40:53 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:30.824 09:40:53 -- setup/common.sh@32 -- # continue 00:12:30.824 09:40:53 -- setup/common.sh@31 -- # IFS=': ' 00:12:30.824 09:40:53 -- setup/common.sh@31 -- # read -r var val _ 00:12:30.824 09:40:53 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:30.824 09:40:53 -- setup/common.sh@32 -- # continue 00:12:30.824 09:40:53 -- setup/common.sh@31 -- # IFS=': ' 00:12:30.824 09:40:53 -- setup/common.sh@31 -- # read -r var val _ 00:12:30.824 09:40:53 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:30.824 09:40:53 -- setup/common.sh@32 -- # continue 00:12:30.824 09:40:53 -- setup/common.sh@31 -- # IFS=': ' 00:12:30.824 09:40:53 -- setup/common.sh@31 -- # read -r var val _ 00:12:30.824 09:40:53 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:30.824 09:40:53 -- setup/common.sh@32 -- # continue 00:12:30.824 09:40:53 -- setup/common.sh@31 -- # IFS=': ' 00:12:30.824 09:40:53 -- setup/common.sh@31 -- # read -r var val _ 00:12:30.824 09:40:53 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:30.824 09:40:53 -- setup/common.sh@32 -- # continue 00:12:30.824 09:40:53 -- setup/common.sh@31 -- # IFS=': ' 00:12:30.824 09:40:53 -- setup/common.sh@31 -- # read -r var val _ 00:12:30.824 09:40:53 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:30.824 09:40:53 -- setup/common.sh@32 -- # continue 00:12:30.824 09:40:53 -- setup/common.sh@31 -- # IFS=': ' 00:12:30.824 09:40:53 -- setup/common.sh@31 -- # read -r var val _ 00:12:30.824 09:40:53 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:30.824 09:40:53 -- setup/common.sh@32 -- # continue 00:12:30.824 09:40:53 -- setup/common.sh@31 -- # IFS=': ' 00:12:30.824 09:40:53 -- setup/common.sh@31 -- # read -r var val _ 00:12:30.824 09:40:53 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:30.824 09:40:53 -- setup/common.sh@32 -- # continue 00:12:30.824 09:40:53 -- setup/common.sh@31 -- # IFS=': ' 00:12:30.824 09:40:53 -- setup/common.sh@31 -- # read -r var val _ 00:12:30.824 09:40:53 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:30.824 09:40:53 -- 
setup/common.sh@32 -- # continue 00:12:30.824 09:40:53 -- setup/common.sh@31 -- # IFS=': ' 00:12:30.824 09:40:53 -- setup/common.sh@31 -- # read -r var val _ 00:12:30.824 09:40:53 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:30.824 09:40:53 -- setup/common.sh@32 -- # continue 00:12:30.824 09:40:53 -- setup/common.sh@31 -- # IFS=': ' 00:12:30.824 09:40:53 -- setup/common.sh@31 -- # read -r var val _ 00:12:30.824 09:40:53 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:30.824 09:40:53 -- setup/common.sh@32 -- # continue 00:12:30.824 09:40:53 -- setup/common.sh@31 -- # IFS=': ' 00:12:30.824 09:40:53 -- setup/common.sh@31 -- # read -r var val _ 00:12:30.824 09:40:53 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:30.824 09:40:53 -- setup/common.sh@32 -- # continue 00:12:30.824 09:40:53 -- setup/common.sh@31 -- # IFS=': ' 00:12:30.824 09:40:53 -- setup/common.sh@31 -- # read -r var val _ 00:12:30.824 09:40:53 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:30.824 09:40:53 -- setup/common.sh@32 -- # continue 00:12:30.824 09:40:53 -- setup/common.sh@31 -- # IFS=': ' 00:12:30.824 09:40:53 -- setup/common.sh@31 -- # read -r var val _ 00:12:30.824 09:40:53 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:30.824 09:40:53 -- setup/common.sh@32 -- # continue 00:12:30.824 09:40:53 -- setup/common.sh@31 -- # IFS=': ' 00:12:30.824 09:40:53 -- setup/common.sh@31 -- # read -r var val _ 00:12:30.824 09:40:53 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:30.824 09:40:53 -- setup/common.sh@32 -- # continue 00:12:30.825 09:40:53 -- setup/common.sh@31 -- # IFS=': ' 00:12:30.825 09:40:53 -- setup/common.sh@31 -- # read -r var val _ 00:12:30.825 09:40:53 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:30.825 09:40:53 -- setup/common.sh@32 -- # continue 00:12:30.825 09:40:53 -- setup/common.sh@31 -- # IFS=': ' 00:12:30.825 09:40:53 -- setup/common.sh@31 -- # read -r var val _ 00:12:30.825 09:40:53 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:30.825 09:40:53 -- setup/common.sh@32 -- # continue 00:12:30.825 09:40:53 -- setup/common.sh@31 -- # IFS=': ' 00:12:30.825 09:40:53 -- setup/common.sh@31 -- # read -r var val _ 00:12:30.825 09:40:53 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:30.825 09:40:53 -- setup/common.sh@32 -- # continue 00:12:30.825 09:40:53 -- setup/common.sh@31 -- # IFS=': ' 00:12:30.825 09:40:53 -- setup/common.sh@31 -- # read -r var val _ 00:12:30.825 09:40:53 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:30.825 09:40:53 -- setup/common.sh@32 -- # continue 00:12:30.825 09:40:53 -- setup/common.sh@31 -- # IFS=': ' 00:12:30.825 09:40:53 -- setup/common.sh@31 -- # read -r var val _ 00:12:30.825 09:40:53 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:30.825 09:40:53 -- setup/common.sh@32 -- # continue 00:12:30.825 09:40:53 -- setup/common.sh@31 -- # IFS=': ' 00:12:30.825 09:40:53 -- setup/common.sh@31 -- # read -r var val _ 00:12:30.825 09:40:53 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:30.825 09:40:53 -- setup/common.sh@32 -- # continue 00:12:30.825 09:40:53 -- setup/common.sh@31 -- # IFS=': ' 00:12:30.825 09:40:53 -- setup/common.sh@31 -- # read -r var val 
_ 00:12:30.825 09:40:53 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:30.825 09:40:53 -- setup/common.sh@32 -- # continue 00:12:30.825 09:40:53 -- setup/common.sh@31 -- # IFS=': ' 00:12:30.825 09:40:53 -- setup/common.sh@31 -- # read -r var val _ 00:12:30.825 09:40:53 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:30.825 09:40:53 -- setup/common.sh@32 -- # continue 00:12:30.825 09:40:53 -- setup/common.sh@31 -- # IFS=': ' 00:12:30.825 09:40:53 -- setup/common.sh@31 -- # read -r var val _ 00:12:30.825 09:40:53 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:30.825 09:40:53 -- setup/common.sh@32 -- # continue 00:12:30.825 09:40:53 -- setup/common.sh@31 -- # IFS=': ' 00:12:30.825 09:40:53 -- setup/common.sh@31 -- # read -r var val _ 00:12:30.825 09:40:53 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:30.825 09:40:53 -- setup/common.sh@32 -- # continue 00:12:30.825 09:40:53 -- setup/common.sh@31 -- # IFS=': ' 00:12:30.825 09:40:53 -- setup/common.sh@31 -- # read -r var val _ 00:12:30.825 09:40:53 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:30.825 09:40:53 -- setup/common.sh@32 -- # continue 00:12:30.825 09:40:53 -- setup/common.sh@31 -- # IFS=': ' 00:12:30.825 09:40:53 -- setup/common.sh@31 -- # read -r var val _ 00:12:30.825 09:40:53 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:30.825 09:40:53 -- setup/common.sh@32 -- # continue 00:12:30.825 09:40:53 -- setup/common.sh@31 -- # IFS=': ' 00:12:30.825 09:40:53 -- setup/common.sh@31 -- # read -r var val _ 00:12:30.825 09:40:53 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:30.825 09:40:53 -- setup/common.sh@33 -- # echo 0 00:12:30.825 09:40:53 -- setup/common.sh@33 -- # return 0 00:12:30.825 09:40:53 -- setup/hugepages.sh@100 -- # resv=0 00:12:30.825 09:40:53 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:12:30.825 nr_hugepages=1024 00:12:30.825 09:40:53 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:12:30.825 resv_hugepages=0 00:12:30.825 09:40:53 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:12:30.825 surplus_hugepages=0 00:12:30.825 anon_hugepages=0 00:12:30.825 09:40:53 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:12:30.825 09:40:53 -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:12:30.825 09:40:53 -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:12:30.825 09:40:53 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:12:30.825 09:40:53 -- setup/common.sh@17 -- # local get=HugePages_Total 00:12:30.825 09:40:53 -- setup/common.sh@18 -- # local node= 00:12:30.825 09:40:53 -- setup/common.sh@19 -- # local var val 00:12:30.825 09:40:53 -- setup/common.sh@20 -- # local mem_f mem 00:12:30.825 09:40:53 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:12:30.825 09:40:53 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:12:30.825 09:40:53 -- setup/common.sh@25 -- # [[ -n '' ]] 00:12:30.825 09:40:53 -- setup/common.sh@28 -- # mapfile -t mem 00:12:30.825 09:40:53 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:12:30.825 09:40:53 -- setup/common.sh@31 -- # IFS=': ' 00:12:30.825 09:40:53 -- setup/common.sh@31 -- # read -r var val _ 00:12:30.825 09:40:53 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 7542756 kB' 'MemAvailable: 9505004 kB' 
'Buffers: 2436 kB' 'Cached: 2174860 kB' 'SwapCached: 0 kB' 'Active: 849844 kB' 'Inactive: 1449600 kB' 'Active(anon): 132612 kB' 'Inactive(anon): 0 kB' 'Active(file): 717232 kB' 'Inactive(file): 1449600 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 1144 kB' 'Writeback: 0 kB' 'AnonPages: 124028 kB' 'Mapped: 48736 kB' 'Shmem: 10464 kB' 'KReclaimable: 64708 kB' 'Slab: 137788 kB' 'SReclaimable: 64708 kB' 'SUnreclaim: 73080 kB' 'KernelStack: 6352 kB' 'PageTables: 4404 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461012 kB' 'Committed_AS: 357080 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54692 kB' 'VmallocChunk: 0 kB' 'Percpu: 6240 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 173932 kB' 'DirectMap2M: 6117376 kB' 'DirectMap1G: 8388608 kB' 00:12:30.825 09:40:53 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:30.825 09:40:53 -- setup/common.sh@32 -- # continue 00:12:30.825 09:40:53 -- setup/common.sh@31 -- # IFS=': ' 00:12:30.825 09:40:53 -- setup/common.sh@31 -- # read -r var val _ 00:12:30.825 09:40:53 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:30.825 09:40:53 -- setup/common.sh@32 -- # continue 00:12:30.825 09:40:53 -- setup/common.sh@31 -- # IFS=': ' 00:12:30.825 09:40:53 -- setup/common.sh@31 -- # read -r var val _ 00:12:30.825 09:40:53 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:30.825 09:40:53 -- setup/common.sh@32 -- # continue 00:12:30.825 09:40:53 -- setup/common.sh@31 -- # IFS=': ' 00:12:30.825 09:40:53 -- setup/common.sh@31 -- # read -r var val _ 00:12:30.825 09:40:53 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:30.825 09:40:53 -- setup/common.sh@32 -- # continue 00:12:30.825 09:40:53 -- setup/common.sh@31 -- # IFS=': ' 00:12:30.825 09:40:53 -- setup/common.sh@31 -- # read -r var val _ 00:12:30.825 09:40:53 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:30.825 09:40:53 -- setup/common.sh@32 -- # continue 00:12:30.825 09:40:53 -- setup/common.sh@31 -- # IFS=': ' 00:12:30.825 09:40:53 -- setup/common.sh@31 -- # read -r var val _ 00:12:30.825 09:40:53 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:30.825 09:40:53 -- setup/common.sh@32 -- # continue 00:12:30.825 09:40:53 -- setup/common.sh@31 -- # IFS=': ' 00:12:30.825 09:40:53 -- setup/common.sh@31 -- # read -r var val _ 00:12:30.825 09:40:53 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:30.825 09:40:53 -- setup/common.sh@32 -- # continue 00:12:30.825 09:40:53 -- setup/common.sh@31 -- # IFS=': ' 00:12:30.825 09:40:53 -- setup/common.sh@31 -- # read -r var val _ 00:12:30.825 09:40:53 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:30.825 09:40:53 -- setup/common.sh@32 -- # continue 00:12:30.825 09:40:53 -- setup/common.sh@31 -- # IFS=': ' 00:12:30.825 09:40:53 -- setup/common.sh@31 -- # read -r var val _ 00:12:30.825 09:40:53 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:30.825 09:40:53 -- 
setup/common.sh@32 -- # continue 00:12:30.825 09:40:53 -- setup/common.sh@31 -- # IFS=': ' 00:12:30.825 09:40:54 -- setup/common.sh@31 -- # read -r var val _ 00:12:30.825 09:40:54 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:30.825 09:40:54 -- setup/common.sh@32 -- # continue 00:12:30.825 09:40:54 -- setup/common.sh@31 -- # IFS=': ' 00:12:30.825 09:40:54 -- setup/common.sh@31 -- # read -r var val _ 00:12:30.825 09:40:54 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:30.825 09:40:54 -- setup/common.sh@32 -- # continue 00:12:30.825 09:40:54 -- setup/common.sh@31 -- # IFS=': ' 00:12:30.825 09:40:54 -- setup/common.sh@31 -- # read -r var val _ 00:12:30.825 09:40:54 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:30.825 09:40:54 -- setup/common.sh@32 -- # continue 00:12:30.825 09:40:54 -- setup/common.sh@31 -- # IFS=': ' 00:12:30.825 09:40:54 -- setup/common.sh@31 -- # read -r var val _ 00:12:30.825 09:40:54 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:30.825 09:40:54 -- setup/common.sh@32 -- # continue 00:12:30.825 09:40:54 -- setup/common.sh@31 -- # IFS=': ' 00:12:30.825 09:40:54 -- setup/common.sh@31 -- # read -r var val _ 00:12:30.825 09:40:54 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:30.825 09:40:54 -- setup/common.sh@32 -- # continue 00:12:30.825 09:40:54 -- setup/common.sh@31 -- # IFS=': ' 00:12:30.825 09:40:54 -- setup/common.sh@31 -- # read -r var val _ 00:12:30.825 09:40:54 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:30.825 09:40:54 -- setup/common.sh@32 -- # continue 00:12:30.825 09:40:54 -- setup/common.sh@31 -- # IFS=': ' 00:12:30.825 09:40:54 -- setup/common.sh@31 -- # read -r var val _ 00:12:30.825 09:40:54 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:30.825 09:40:54 -- setup/common.sh@32 -- # continue 00:12:30.825 09:40:54 -- setup/common.sh@31 -- # IFS=': ' 00:12:30.825 09:40:54 -- setup/common.sh@31 -- # read -r var val _ 00:12:30.825 09:40:54 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:30.825 09:40:54 -- setup/common.sh@32 -- # continue 00:12:30.825 09:40:54 -- setup/common.sh@31 -- # IFS=': ' 00:12:30.825 09:40:54 -- setup/common.sh@31 -- # read -r var val _ 00:12:30.825 09:40:54 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:30.825 09:40:54 -- setup/common.sh@32 -- # continue 00:12:30.825 09:40:54 -- setup/common.sh@31 -- # IFS=': ' 00:12:30.825 09:40:54 -- setup/common.sh@31 -- # read -r var val _ 00:12:30.825 09:40:54 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:30.825 09:40:54 -- setup/common.sh@32 -- # continue 00:12:30.825 09:40:54 -- setup/common.sh@31 -- # IFS=': ' 00:12:30.825 09:40:54 -- setup/common.sh@31 -- # read -r var val _ 00:12:30.825 09:40:54 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:30.825 09:40:54 -- setup/common.sh@32 -- # continue 00:12:30.825 09:40:54 -- setup/common.sh@31 -- # IFS=': ' 00:12:30.825 09:40:54 -- setup/common.sh@31 -- # read -r var val _ 00:12:30.825 09:40:54 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:30.825 09:40:54 -- setup/common.sh@32 -- # continue 00:12:30.825 09:40:54 -- setup/common.sh@31 -- # IFS=': ' 00:12:30.825 09:40:54 -- setup/common.sh@31 -- # read -r var val _ 
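[annotation] By hugepages.sh@97-@110 above, the three lookups have produced anon=0, surp=0 and resv=0, and the script checks that the kernel's HugePages_Total matches the requested page count plus surplus and reserved pages before breaking the count down per node. With this run's numbers the check is trivially satisfied; a small illustration follows, with values taken from the trace and variable names chosen for illustration:

    # Consistency check corresponding to hugepages.sh@107/@109 above.
    nr_hugepages=1024; surp=0; resv=0
    hugepages_total=1024                                   # from get_meminfo HugePages_Total
    (( hugepages_total == nr_hugepages + surp + resv )) \
        && echo "hugepage accounting consistent"           # 1024 == 1024 + 0 + 0
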
00:12:30.825 09:40:54 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:30.825 09:40:54 -- setup/common.sh@32 -- # continue 00:12:30.826 09:40:54 -- setup/common.sh@31 -- # IFS=': ' 00:12:30.826 09:40:54 -- setup/common.sh@31 -- # read -r var val _ 00:12:30.826 09:40:54 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:30.826 09:40:54 -- setup/common.sh@32 -- # continue 00:12:30.826 09:40:54 -- setup/common.sh@31 -- # IFS=': ' 00:12:30.826 09:40:54 -- setup/common.sh@31 -- # read -r var val _ 00:12:30.826 09:40:54 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:30.826 09:40:54 -- setup/common.sh@32 -- # continue 00:12:30.826 09:40:54 -- setup/common.sh@31 -- # IFS=': ' 00:12:30.826 09:40:54 -- setup/common.sh@31 -- # read -r var val _ 00:12:30.826 09:40:54 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:30.826 09:40:54 -- setup/common.sh@32 -- # continue 00:12:30.826 09:40:54 -- setup/common.sh@31 -- # IFS=': ' 00:12:30.826 09:40:54 -- setup/common.sh@31 -- # read -r var val _ 00:12:30.826 09:40:54 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:30.826 09:40:54 -- setup/common.sh@32 -- # continue 00:12:30.826 09:40:54 -- setup/common.sh@31 -- # IFS=': ' 00:12:30.826 09:40:54 -- setup/common.sh@31 -- # read -r var val _ 00:12:30.826 09:40:54 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:30.826 09:40:54 -- setup/common.sh@32 -- # continue 00:12:30.826 09:40:54 -- setup/common.sh@31 -- # IFS=': ' 00:12:30.826 09:40:54 -- setup/common.sh@31 -- # read -r var val _ 00:12:30.826 09:40:54 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:30.826 09:40:54 -- setup/common.sh@32 -- # continue 00:12:30.826 09:40:54 -- setup/common.sh@31 -- # IFS=': ' 00:12:30.826 09:40:54 -- setup/common.sh@31 -- # read -r var val _ 00:12:30.826 09:40:54 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:30.826 09:40:54 -- setup/common.sh@32 -- # continue 00:12:30.826 09:40:54 -- setup/common.sh@31 -- # IFS=': ' 00:12:30.826 09:40:54 -- setup/common.sh@31 -- # read -r var val _ 00:12:30.826 09:40:54 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:30.826 09:40:54 -- setup/common.sh@32 -- # continue 00:12:30.826 09:40:54 -- setup/common.sh@31 -- # IFS=': ' 00:12:30.826 09:40:54 -- setup/common.sh@31 -- # read -r var val _ 00:12:30.826 09:40:54 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:30.826 09:40:54 -- setup/common.sh@32 -- # continue 00:12:30.826 09:40:54 -- setup/common.sh@31 -- # IFS=': ' 00:12:30.826 09:40:54 -- setup/common.sh@31 -- # read -r var val _ 00:12:30.826 09:40:54 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:30.826 09:40:54 -- setup/common.sh@32 -- # continue 00:12:30.826 09:40:54 -- setup/common.sh@31 -- # IFS=': ' 00:12:30.826 09:40:54 -- setup/common.sh@31 -- # read -r var val _ 00:12:30.826 09:40:54 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:30.826 09:40:54 -- setup/common.sh@32 -- # continue 00:12:30.826 09:40:54 -- setup/common.sh@31 -- # IFS=': ' 00:12:30.826 09:40:54 -- setup/common.sh@31 -- # read -r var val _ 00:12:30.826 09:40:54 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:30.826 09:40:54 -- setup/common.sh@32 -- # continue 
00:12:30.826 09:40:54 -- setup/common.sh@31 -- # IFS=': ' 00:12:30.826 09:40:54 -- setup/common.sh@31 -- # read -r var val _ 00:12:30.826 09:40:54 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:30.826 09:40:54 -- setup/common.sh@32 -- # continue 00:12:30.826 09:40:54 -- setup/common.sh@31 -- # IFS=': ' 00:12:30.826 09:40:54 -- setup/common.sh@31 -- # read -r var val _ 00:12:30.826 09:40:54 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:30.826 09:40:54 -- setup/common.sh@32 -- # continue 00:12:30.826 09:40:54 -- setup/common.sh@31 -- # IFS=': ' 00:12:30.826 09:40:54 -- setup/common.sh@31 -- # read -r var val _ 00:12:30.826 09:40:54 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:30.826 09:40:54 -- setup/common.sh@32 -- # continue 00:12:30.826 09:40:54 -- setup/common.sh@31 -- # IFS=': ' 00:12:30.826 09:40:54 -- setup/common.sh@31 -- # read -r var val _ 00:12:30.826 09:40:54 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:30.826 09:40:54 -- setup/common.sh@32 -- # continue 00:12:30.826 09:40:54 -- setup/common.sh@31 -- # IFS=': ' 00:12:30.826 09:40:54 -- setup/common.sh@31 -- # read -r var val _ 00:12:30.826 09:40:54 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:30.826 09:40:54 -- setup/common.sh@32 -- # continue 00:12:30.826 09:40:54 -- setup/common.sh@31 -- # IFS=': ' 00:12:30.826 09:40:54 -- setup/common.sh@31 -- # read -r var val _ 00:12:30.826 09:40:54 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:30.826 09:40:54 -- setup/common.sh@32 -- # continue 00:12:30.826 09:40:54 -- setup/common.sh@31 -- # IFS=': ' 00:12:30.826 09:40:54 -- setup/common.sh@31 -- # read -r var val _ 00:12:30.826 09:40:54 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:30.826 09:40:54 -- setup/common.sh@32 -- # continue 00:12:30.826 09:40:54 -- setup/common.sh@31 -- # IFS=': ' 00:12:30.826 09:40:54 -- setup/common.sh@31 -- # read -r var val _ 00:12:30.826 09:40:54 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:30.826 09:40:54 -- setup/common.sh@32 -- # continue 00:12:30.826 09:40:54 -- setup/common.sh@31 -- # IFS=': ' 00:12:30.826 09:40:54 -- setup/common.sh@31 -- # read -r var val _ 00:12:30.826 09:40:54 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:30.826 09:40:54 -- setup/common.sh@32 -- # continue 00:12:30.826 09:40:54 -- setup/common.sh@31 -- # IFS=': ' 00:12:30.826 09:40:54 -- setup/common.sh@31 -- # read -r var val _ 00:12:30.826 09:40:54 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:30.826 09:40:54 -- setup/common.sh@32 -- # continue 00:12:30.826 09:40:54 -- setup/common.sh@31 -- # IFS=': ' 00:12:30.826 09:40:54 -- setup/common.sh@31 -- # read -r var val _ 00:12:30.826 09:40:54 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:30.826 09:40:54 -- setup/common.sh@32 -- # continue 00:12:30.826 09:40:54 -- setup/common.sh@31 -- # IFS=': ' 00:12:30.826 09:40:54 -- setup/common.sh@31 -- # read -r var val _ 00:12:30.826 09:40:54 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:30.826 09:40:54 -- setup/common.sh@32 -- # continue 00:12:30.826 09:40:54 -- setup/common.sh@31 -- # IFS=': ' 00:12:30.826 09:40:54 -- setup/common.sh@31 -- # read -r var val _ 
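[annotation] The meminfo snapshots in this pass report 'Hugepagesize: 2048 kB' and 'HugePages_Total: 1024'; the 'Hugetlb: 2097152 kB' field is exactly the product of the two, i.e. the 2 GiB pool that the even_2G_alloc test name refers to:

    # 1024 pages x 2048 kB/page = 2097152 kB = 2 GiB, matching 'Hugetlb: 2097152 kB'
    echo "$(( 1024 * 2048 )) kB"    # -> 2097152 kB
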
00:12:30.826 09:40:54 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:30.826 09:40:54 -- setup/common.sh@32 -- # continue 00:12:30.826 09:40:54 -- setup/common.sh@31 -- # IFS=': ' 00:12:30.826 09:40:54 -- setup/common.sh@31 -- # read -r var val _ 00:12:30.826 09:40:54 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:30.826 09:40:54 -- setup/common.sh@32 -- # continue 00:12:30.826 09:40:54 -- setup/common.sh@31 -- # IFS=': ' 00:12:30.826 09:40:54 -- setup/common.sh@31 -- # read -r var val _ 00:12:30.826 09:40:54 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:30.826 09:40:54 -- setup/common.sh@33 -- # echo 1024 00:12:30.826 09:40:54 -- setup/common.sh@33 -- # return 0 00:12:30.826 09:40:54 -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:12:30.826 09:40:54 -- setup/hugepages.sh@112 -- # get_nodes 00:12:30.826 09:40:54 -- setup/hugepages.sh@27 -- # local node 00:12:30.826 09:40:54 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:12:30.826 09:40:54 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:12:30.826 09:40:54 -- setup/hugepages.sh@32 -- # no_nodes=1 00:12:30.826 09:40:54 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:12:30.826 09:40:54 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:12:30.826 09:40:54 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:12:30.826 09:40:54 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:12:30.826 09:40:54 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:12:30.826 09:40:54 -- setup/common.sh@18 -- # local node=0 00:12:30.826 09:40:54 -- setup/common.sh@19 -- # local var val 00:12:30.826 09:40:54 -- setup/common.sh@20 -- # local mem_f mem 00:12:30.826 09:40:54 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:12:30.826 09:40:54 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:12:30.826 09:40:54 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:12:30.826 09:40:54 -- setup/common.sh@28 -- # mapfile -t mem 00:12:30.826 09:40:54 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:12:30.826 09:40:54 -- setup/common.sh@31 -- # IFS=': ' 00:12:30.826 09:40:54 -- setup/common.sh@31 -- # read -r var val _ 00:12:30.826 09:40:54 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 7542756 kB' 'MemUsed: 4699216 kB' 'SwapCached: 0 kB' 'Active: 849852 kB' 'Inactive: 1449600 kB' 'Active(anon): 132620 kB' 'Inactive(anon): 0 kB' 'Active(file): 717232 kB' 'Inactive(file): 1449600 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 1144 kB' 'Writeback: 0 kB' 'FilePages: 2177296 kB' 'Mapped: 48736 kB' 'AnonPages: 124028 kB' 'Shmem: 10464 kB' 'KernelStack: 6352 kB' 'PageTables: 4404 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 64708 kB' 'Slab: 137788 kB' 'SReclaimable: 64708 kB' 'SUnreclaim: 73080 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:12:30.826 09:40:54 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:30.826 09:40:54 -- setup/common.sh@32 -- # continue 00:12:30.826 09:40:54 -- setup/common.sh@31 -- # IFS=': ' 00:12:30.826 09:40:54 -- setup/common.sh@31 -- # read -r var val _ 00:12:30.826 09:40:54 -- 
setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:30.826 09:40:54 -- setup/common.sh@32 -- # continue 00:12:30.826 09:40:54 -- setup/common.sh@31 -- # IFS=': ' 00:12:30.826 09:40:54 -- setup/common.sh@31 -- # read -r var val _ 00:12:30.826 09:40:54 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:30.826 09:40:54 -- setup/common.sh@32 -- # continue 00:12:30.826 09:40:54 -- setup/common.sh@31 -- # IFS=': ' 00:12:30.826 09:40:54 -- setup/common.sh@31 -- # read -r var val _ 00:12:30.826 09:40:54 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:30.826 09:40:54 -- setup/common.sh@32 -- # continue 00:12:30.826 09:40:54 -- setup/common.sh@31 -- # IFS=': ' 00:12:30.826 09:40:54 -- setup/common.sh@31 -- # read -r var val _ 00:12:30.826 09:40:54 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:30.826 09:40:54 -- setup/common.sh@32 -- # continue 00:12:30.826 09:40:54 -- setup/common.sh@31 -- # IFS=': ' 00:12:30.826 09:40:54 -- setup/common.sh@31 -- # read -r var val _ 00:12:30.826 09:40:54 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:30.826 09:40:54 -- setup/common.sh@32 -- # continue 00:12:30.826 09:40:54 -- setup/common.sh@31 -- # IFS=': ' 00:12:30.826 09:40:54 -- setup/common.sh@31 -- # read -r var val _ 00:12:30.826 09:40:54 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:30.826 09:40:54 -- setup/common.sh@32 -- # continue 00:12:30.826 09:40:54 -- setup/common.sh@31 -- # IFS=': ' 00:12:30.826 09:40:54 -- setup/common.sh@31 -- # read -r var val _ 00:12:30.826 09:40:54 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:30.826 09:40:54 -- setup/common.sh@32 -- # continue 00:12:30.826 09:40:54 -- setup/common.sh@31 -- # IFS=': ' 00:12:30.826 09:40:54 -- setup/common.sh@31 -- # read -r var val _ 00:12:30.826 09:40:54 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:30.826 09:40:54 -- setup/common.sh@32 -- # continue 00:12:30.826 09:40:54 -- setup/common.sh@31 -- # IFS=': ' 00:12:30.827 09:40:54 -- setup/common.sh@31 -- # read -r var val _ 00:12:30.827 09:40:54 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:30.827 09:40:54 -- setup/common.sh@32 -- # continue 00:12:30.827 09:40:54 -- setup/common.sh@31 -- # IFS=': ' 00:12:30.827 09:40:54 -- setup/common.sh@31 -- # read -r var val _ 00:12:30.827 09:40:54 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:30.827 09:40:54 -- setup/common.sh@32 -- # continue 00:12:30.827 09:40:54 -- setup/common.sh@31 -- # IFS=': ' 00:12:30.827 09:40:54 -- setup/common.sh@31 -- # read -r var val _ 00:12:30.827 09:40:54 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:30.827 09:40:54 -- setup/common.sh@32 -- # continue 00:12:30.827 09:40:54 -- setup/common.sh@31 -- # IFS=': ' 00:12:30.827 09:40:54 -- setup/common.sh@31 -- # read -r var val _ 00:12:30.827 09:40:54 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:30.827 09:40:54 -- setup/common.sh@32 -- # continue 00:12:30.827 09:40:54 -- setup/common.sh@31 -- # IFS=': ' 00:12:30.827 09:40:54 -- setup/common.sh@31 -- # read -r var val _ 00:12:30.827 09:40:54 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:30.827 09:40:54 -- setup/common.sh@32 -- # continue 00:12:30.827 09:40:54 -- setup/common.sh@31 -- # IFS=': ' 
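[annotation] Once the global totals check out, get_nodes enumerates /sys/devices/system/node/node* (a single node0 on this VM) and get_meminfo is re-run with a node argument, which swaps the source file to /sys/devices/system/node/node0/meminfo and strips the leading "Node 0" prefix from every line (the common.sh@29 expansion above) so the same key/value scan applies. A sketch of that per-node variant, illustrative rather than the literal setup/common.sh code:

    # Per-node variant of the lookup above (this VM has only node0).
    shopt -s extglob
    node=0
    mem_f=/sys/devices/system/node/node${node}/meminfo   # lines look like "Node 0 HugePages_Surp: 0"
    mapfile -t mem < "$mem_f"
    mem=("${mem[@]#Node +([0-9]) }")                     # drop the "Node 0 " prefix, as common.sh@29 does
    for line in "${mem[@]}"; do
        IFS=': ' read -r var val _ <<< "$line"
        [[ $var == HugePages_Surp ]] && { echo "$val"; break; }
    done
    # On this run node0's HugePages_Surp is 0, so nodes_test[0] stays at 1024,
    # which is what the "node0=1024 expecting 1024" line further down reports.
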
00:12:30.827 09:40:54 -- setup/common.sh@31 -- # read -r var val _ 00:12:30.827 09:40:54 -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:30.827 09:40:54 -- setup/common.sh@32 -- # continue 00:12:30.827 09:40:54 -- setup/common.sh@31 -- # IFS=': ' 00:12:30.827 09:40:54 -- setup/common.sh@31 -- # read -r var val _ 00:12:30.827 09:40:54 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:30.827 09:40:54 -- setup/common.sh@32 -- # continue 00:12:30.827 09:40:54 -- setup/common.sh@31 -- # IFS=': ' 00:12:30.827 09:40:54 -- setup/common.sh@31 -- # read -r var val _ 00:12:30.827 09:40:54 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:30.827 09:40:54 -- setup/common.sh@32 -- # continue 00:12:30.827 09:40:54 -- setup/common.sh@31 -- # IFS=': ' 00:12:30.827 09:40:54 -- setup/common.sh@31 -- # read -r var val _ 00:12:30.827 09:40:54 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:30.827 09:40:54 -- setup/common.sh@32 -- # continue 00:12:30.827 09:40:54 -- setup/common.sh@31 -- # IFS=': ' 00:12:30.827 09:40:54 -- setup/common.sh@31 -- # read -r var val _ 00:12:30.827 09:40:54 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:30.827 09:40:54 -- setup/common.sh@32 -- # continue 00:12:30.827 09:40:54 -- setup/common.sh@31 -- # IFS=': ' 00:12:30.827 09:40:54 -- setup/common.sh@31 -- # read -r var val _ 00:12:30.827 09:40:54 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:30.827 09:40:54 -- setup/common.sh@32 -- # continue 00:12:30.827 09:40:54 -- setup/common.sh@31 -- # IFS=': ' 00:12:30.827 09:40:54 -- setup/common.sh@31 -- # read -r var val _ 00:12:30.827 09:40:54 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:30.827 09:40:54 -- setup/common.sh@32 -- # continue 00:12:30.827 09:40:54 -- setup/common.sh@31 -- # IFS=': ' 00:12:30.827 09:40:54 -- setup/common.sh@31 -- # read -r var val _ 00:12:30.827 09:40:54 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:30.827 09:40:54 -- setup/common.sh@32 -- # continue 00:12:30.827 09:40:54 -- setup/common.sh@31 -- # IFS=': ' 00:12:30.827 09:40:54 -- setup/common.sh@31 -- # read -r var val _ 00:12:30.827 09:40:54 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:30.827 09:40:54 -- setup/common.sh@32 -- # continue 00:12:30.827 09:40:54 -- setup/common.sh@31 -- # IFS=': ' 00:12:30.827 09:40:54 -- setup/common.sh@31 -- # read -r var val _ 00:12:30.827 09:40:54 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:30.827 09:40:54 -- setup/common.sh@32 -- # continue 00:12:30.827 09:40:54 -- setup/common.sh@31 -- # IFS=': ' 00:12:30.827 09:40:54 -- setup/common.sh@31 -- # read -r var val _ 00:12:30.827 09:40:54 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:30.827 09:40:54 -- setup/common.sh@32 -- # continue 00:12:30.827 09:40:54 -- setup/common.sh@31 -- # IFS=': ' 00:12:30.827 09:40:54 -- setup/common.sh@31 -- # read -r var val _ 00:12:30.827 09:40:54 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:30.827 09:40:54 -- setup/common.sh@32 -- # continue 00:12:30.827 09:40:54 -- setup/common.sh@31 -- # IFS=': ' 00:12:30.827 09:40:54 -- setup/common.sh@31 -- # read -r var val _ 00:12:30.827 09:40:54 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:30.827 09:40:54 -- 
setup/common.sh@32 -- # continue 00:12:30.827 09:40:54 -- setup/common.sh@31 -- # IFS=': ' 00:12:30.827 09:40:54 -- setup/common.sh@31 -- # read -r var val _ 00:12:30.827 09:40:54 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:30.827 09:40:54 -- setup/common.sh@32 -- # continue 00:12:30.827 09:40:54 -- setup/common.sh@31 -- # IFS=': ' 00:12:30.827 09:40:54 -- setup/common.sh@31 -- # read -r var val _ 00:12:30.827 09:40:54 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:30.827 09:40:54 -- setup/common.sh@32 -- # continue 00:12:30.827 09:40:54 -- setup/common.sh@31 -- # IFS=': ' 00:12:30.827 09:40:54 -- setup/common.sh@31 -- # read -r var val _ 00:12:30.827 09:40:54 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:30.827 09:40:54 -- setup/common.sh@32 -- # continue 00:12:30.827 09:40:54 -- setup/common.sh@31 -- # IFS=': ' 00:12:30.827 09:40:54 -- setup/common.sh@31 -- # read -r var val _ 00:12:30.827 09:40:54 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:30.827 09:40:54 -- setup/common.sh@32 -- # continue 00:12:30.827 09:40:54 -- setup/common.sh@31 -- # IFS=': ' 00:12:30.827 09:40:54 -- setup/common.sh@31 -- # read -r var val _ 00:12:30.827 09:40:54 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:30.827 09:40:54 -- setup/common.sh@32 -- # continue 00:12:30.827 09:40:54 -- setup/common.sh@31 -- # IFS=': ' 00:12:30.827 09:40:54 -- setup/common.sh@31 -- # read -r var val _ 00:12:30.827 09:40:54 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:30.827 09:40:54 -- setup/common.sh@32 -- # continue 00:12:30.827 09:40:54 -- setup/common.sh@31 -- # IFS=': ' 00:12:30.827 09:40:54 -- setup/common.sh@31 -- # read -r var val _ 00:12:30.827 09:40:54 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:30.827 09:40:54 -- setup/common.sh@32 -- # continue 00:12:30.827 09:40:54 -- setup/common.sh@31 -- # IFS=': ' 00:12:30.827 09:40:54 -- setup/common.sh@31 -- # read -r var val _ 00:12:30.827 09:40:54 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:30.827 09:40:54 -- setup/common.sh@32 -- # continue 00:12:30.827 09:40:54 -- setup/common.sh@31 -- # IFS=': ' 00:12:30.827 09:40:54 -- setup/common.sh@31 -- # read -r var val _ 00:12:30.827 09:40:54 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:30.827 09:40:54 -- setup/common.sh@32 -- # continue 00:12:30.827 09:40:54 -- setup/common.sh@31 -- # IFS=': ' 00:12:30.827 09:40:54 -- setup/common.sh@31 -- # read -r var val _ 00:12:30.827 09:40:54 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:30.827 09:40:54 -- setup/common.sh@33 -- # echo 0 00:12:30.827 09:40:54 -- setup/common.sh@33 -- # return 0 00:12:30.827 09:40:54 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:12:30.827 09:40:54 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:12:30.827 09:40:54 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:12:30.827 node0=1024 expecting 1024 00:12:30.827 ************************************ 00:12:30.827 END TEST even_2G_alloc 00:12:30.827 ************************************ 00:12:30.827 09:40:54 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:12:30.827 09:40:54 -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:12:30.827 09:40:54 -- 
setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:12:30.827 00:12:30.827 real 0m0.578s 00:12:30.827 user 0m0.264s 00:12:30.827 sys 0m0.316s 00:12:30.827 09:40:54 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:12:30.827 09:40:54 -- common/autotest_common.sh@10 -- # set +x 00:12:30.827 09:40:54 -- setup/hugepages.sh@213 -- # run_test odd_alloc odd_alloc 00:12:30.827 09:40:54 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:12:30.827 09:40:54 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:12:30.827 09:40:54 -- common/autotest_common.sh@10 -- # set +x 00:12:30.827 ************************************ 00:12:30.827 START TEST odd_alloc 00:12:30.827 ************************************ 00:12:30.827 09:40:54 -- common/autotest_common.sh@1111 -- # odd_alloc 00:12:30.827 09:40:54 -- setup/hugepages.sh@159 -- # get_test_nr_hugepages 2098176 00:12:30.827 09:40:54 -- setup/hugepages.sh@49 -- # local size=2098176 00:12:30.827 09:40:54 -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:12:30.827 09:40:54 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:12:30.827 09:40:54 -- setup/hugepages.sh@57 -- # nr_hugepages=1025 00:12:30.827 09:40:54 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:12:30.827 09:40:54 -- setup/hugepages.sh@62 -- # user_nodes=() 00:12:30.827 09:40:54 -- setup/hugepages.sh@62 -- # local user_nodes 00:12:30.827 09:40:54 -- setup/hugepages.sh@64 -- # local _nr_hugepages=1025 00:12:30.827 09:40:54 -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:12:30.827 09:40:54 -- setup/hugepages.sh@67 -- # nodes_test=() 00:12:30.827 09:40:54 -- setup/hugepages.sh@67 -- # local -g nodes_test 00:12:30.828 09:40:54 -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:12:30.828 09:40:54 -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:12:30.828 09:40:54 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:12:30.828 09:40:54 -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=1025 00:12:30.828 09:40:54 -- setup/hugepages.sh@83 -- # : 0 00:12:30.828 09:40:54 -- setup/hugepages.sh@84 -- # : 0 00:12:30.828 09:40:54 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:12:30.828 09:40:54 -- setup/hugepages.sh@160 -- # HUGEMEM=2049 00:12:30.828 09:40:54 -- setup/hugepages.sh@160 -- # HUGE_EVEN_ALLOC=yes 00:12:30.828 09:40:54 -- setup/hugepages.sh@160 -- # setup output 00:12:30.828 09:40:54 -- setup/common.sh@9 -- # [[ output == output ]] 00:12:30.828 09:40:54 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:12:31.402 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:12:31.402 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:12:31.402 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:12:31.402 09:40:54 -- setup/hugepages.sh@161 -- # verify_nr_hugepages 00:12:31.402 09:40:54 -- setup/hugepages.sh@89 -- # local node 00:12:31.402 09:40:54 -- setup/hugepages.sh@90 -- # local sorted_t 00:12:31.402 09:40:54 -- setup/hugepages.sh@91 -- # local sorted_s 00:12:31.402 09:40:54 -- setup/hugepages.sh@92 -- # local surp 00:12:31.402 09:40:54 -- setup/hugepages.sh@93 -- # local resv 00:12:31.402 09:40:54 -- setup/hugepages.sh@94 -- # local anon 00:12:31.402 09:40:54 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:12:31.402 09:40:54 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:12:31.402 09:40:54 -- setup/common.sh@17 -- # local get=AnonHugePages 00:12:31.402 09:40:54 -- setup/common.sh@18 -- # local 
node= 00:12:31.402 09:40:54 -- setup/common.sh@19 -- # local var val 00:12:31.402 09:40:54 -- setup/common.sh@20 -- # local mem_f mem 00:12:31.402 09:40:54 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:12:31.402 09:40:54 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:12:31.402 09:40:54 -- setup/common.sh@25 -- # [[ -n '' ]] 00:12:31.402 09:40:54 -- setup/common.sh@28 -- # mapfile -t mem 00:12:31.402 09:40:54 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:12:31.402 09:40:54 -- setup/common.sh@31 -- # IFS=': ' 00:12:31.402 09:40:54 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 7540232 kB' 'MemAvailable: 9502500 kB' 'Buffers: 2436 kB' 'Cached: 2174864 kB' 'SwapCached: 0 kB' 'Active: 850668 kB' 'Inactive: 1449604 kB' 'Active(anon): 133436 kB' 'Inactive(anon): 0 kB' 'Active(file): 717232 kB' 'Inactive(file): 1449604 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 1300 kB' 'Writeback: 0 kB' 'AnonPages: 124320 kB' 'Mapped: 48844 kB' 'Shmem: 10464 kB' 'KReclaimable: 64740 kB' 'Slab: 137832 kB' 'SReclaimable: 64740 kB' 'SUnreclaim: 73092 kB' 'KernelStack: 6340 kB' 'PageTables: 4432 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13459988 kB' 'Committed_AS: 357248 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54740 kB' 'VmallocChunk: 0 kB' 'Percpu: 6240 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 173932 kB' 'DirectMap2M: 6117376 kB' 'DirectMap1G: 8388608 kB' 00:12:31.402 09:40:54 -- setup/common.sh@31 -- # read -r var val _ 00:12:31.402 09:40:54 -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:31.402 09:40:54 -- setup/common.sh@32 -- # continue 00:12:31.402 09:40:54 -- setup/common.sh@31 -- # IFS=': ' 00:12:31.402 09:40:54 -- setup/common.sh@31 -- # read -r var val _ 00:12:31.402 09:40:54 -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:31.402 09:40:54 -- setup/common.sh@32 -- # continue 00:12:31.402 09:40:54 -- setup/common.sh@31 -- # IFS=': ' 00:12:31.402 09:40:54 -- setup/common.sh@31 -- # read -r var val _ 00:12:31.402 09:40:54 -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:31.402 09:40:54 -- setup/common.sh@32 -- # continue 00:12:31.402 09:40:54 -- setup/common.sh@31 -- # IFS=': ' 00:12:31.402 09:40:54 -- setup/common.sh@31 -- # read -r var val _ 00:12:31.402 09:40:54 -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:31.403 09:40:54 -- setup/common.sh@32 -- # continue 00:12:31.403 09:40:54 -- setup/common.sh@31 -- # IFS=': ' 00:12:31.403 09:40:54 -- setup/common.sh@31 -- # read -r var val _ 00:12:31.403 09:40:54 -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:31.403 09:40:54 -- setup/common.sh@32 -- # continue 00:12:31.403 09:40:54 -- setup/common.sh@31 -- # IFS=': ' 00:12:31.403 09:40:54 -- setup/common.sh@31 -- # read -r var val _ 00:12:31.403 09:40:54 -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:31.403 09:40:54 -- setup/common.sh@32 -- # continue 00:12:31.403 09:40:54 -- setup/common.sh@31 -- # IFS=': ' 
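(Note: earlier in the trace the odd_alloc test called get_test_nr_hugepages 2098176 and arrived at nr_hugepages=1025 with HUGEMEM=2049, which matches the HugePages_Total: 1025 and Hugetlb: 2099200 kB values in the meminfo snapshot just printed. A hedged sketch of arithmetic that reproduces those numbers, not the actual hugepages.sh code:

# Hypothetical arithmetic reproducing the odd_alloc sizing seen in the trace
# (get_test_nr_hugepages 2098176 -> nr_hugepages=1025); a sketch only.
size_kb=2098176        # requested size: 2049 MiB, i.e. HUGEMEM=2049
hugepagesize_kb=2048   # Hugepagesize reported in the meminfo snapshot above

# Rounding the request up to whole 2 MiB pages yields an odd page count:
nr_hugepages=$(( (size_kb + hugepagesize_kb - 1) / hugepagesize_kb ))
echo "nr_hugepages=$nr_hugepages"                        # 1025
echo "Hugetlb=$(( nr_hugepages * hugepagesize_kb )) kB"  # 2099200 kB, as reported)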
00:12:31.403 09:40:54 -- setup/common.sh@31 -- # read -r var val _ 00:12:31.403 09:40:54 -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:31.403 09:40:54 -- setup/common.sh@32 -- # continue 00:12:31.403 09:40:54 -- setup/common.sh@31 -- # IFS=': ' 00:12:31.403 09:40:54 -- setup/common.sh@31 -- # read -r var val _ 00:12:31.403 09:40:54 -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:31.403 09:40:54 -- setup/common.sh@32 -- # continue 00:12:31.403 09:40:54 -- setup/common.sh@31 -- # IFS=': ' 00:12:31.403 09:40:54 -- setup/common.sh@31 -- # read -r var val _ 00:12:31.403 09:40:54 -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:31.403 09:40:54 -- setup/common.sh@32 -- # continue 00:12:31.403 09:40:54 -- setup/common.sh@31 -- # IFS=': ' 00:12:31.403 09:40:54 -- setup/common.sh@31 -- # read -r var val _ 00:12:31.403 09:40:54 -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:31.403 09:40:54 -- setup/common.sh@32 -- # continue 00:12:31.403 09:40:54 -- setup/common.sh@31 -- # IFS=': ' 00:12:31.403 09:40:54 -- setup/common.sh@31 -- # read -r var val _ 00:12:31.403 09:40:54 -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:31.403 09:40:54 -- setup/common.sh@32 -- # continue 00:12:31.403 09:40:54 -- setup/common.sh@31 -- # IFS=': ' 00:12:31.403 09:40:54 -- setup/common.sh@31 -- # read -r var val _ 00:12:31.403 09:40:54 -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:31.403 09:40:54 -- setup/common.sh@32 -- # continue 00:12:31.403 09:40:54 -- setup/common.sh@31 -- # IFS=': ' 00:12:31.403 09:40:54 -- setup/common.sh@31 -- # read -r var val _ 00:12:31.403 09:40:54 -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:31.403 09:40:54 -- setup/common.sh@32 -- # continue 00:12:31.403 09:40:54 -- setup/common.sh@31 -- # IFS=': ' 00:12:31.403 09:40:54 -- setup/common.sh@31 -- # read -r var val _ 00:12:31.403 09:40:54 -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:31.403 09:40:54 -- setup/common.sh@32 -- # continue 00:12:31.403 09:40:54 -- setup/common.sh@31 -- # IFS=': ' 00:12:31.403 09:40:54 -- setup/common.sh@31 -- # read -r var val _ 00:12:31.403 09:40:54 -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:31.403 09:40:54 -- setup/common.sh@32 -- # continue 00:12:31.403 09:40:54 -- setup/common.sh@31 -- # IFS=': ' 00:12:31.403 09:40:54 -- setup/common.sh@31 -- # read -r var val _ 00:12:31.403 09:40:54 -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:31.403 09:40:54 -- setup/common.sh@32 -- # continue 00:12:31.403 09:40:54 -- setup/common.sh@31 -- # IFS=': ' 00:12:31.403 09:40:54 -- setup/common.sh@31 -- # read -r var val _ 00:12:31.403 09:40:54 -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:31.403 09:40:54 -- setup/common.sh@32 -- # continue 00:12:31.403 09:40:54 -- setup/common.sh@31 -- # IFS=': ' 00:12:31.403 09:40:54 -- setup/common.sh@31 -- # read -r var val _ 00:12:31.403 09:40:54 -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:31.403 09:40:54 -- setup/common.sh@32 -- # continue 00:12:31.403 09:40:54 -- setup/common.sh@31 -- # IFS=': ' 00:12:31.403 09:40:54 -- setup/common.sh@31 -- # read -r var val _ 00:12:31.403 09:40:54 -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:31.403 09:40:54 -- setup/common.sh@32 -- # 
continue 00:12:31.403 09:40:54 -- setup/common.sh@31 -- # IFS=': ' 00:12:31.403 09:40:54 -- setup/common.sh@31 -- # read -r var val _ 00:12:31.403 09:40:54 -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:31.403 09:40:54 -- setup/common.sh@32 -- # continue 00:12:31.403 09:40:54 -- setup/common.sh@31 -- # IFS=': ' 00:12:31.403 09:40:54 -- setup/common.sh@31 -- # read -r var val _ 00:12:31.403 09:40:54 -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:31.403 09:40:54 -- setup/common.sh@32 -- # continue 00:12:31.403 09:40:54 -- setup/common.sh@31 -- # IFS=': ' 00:12:31.403 09:40:54 -- setup/common.sh@31 -- # read -r var val _ 00:12:31.403 09:40:54 -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:31.403 09:40:54 -- setup/common.sh@32 -- # continue 00:12:31.403 09:40:54 -- setup/common.sh@31 -- # IFS=': ' 00:12:31.403 09:40:54 -- setup/common.sh@31 -- # read -r var val _ 00:12:31.403 09:40:54 -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:31.403 09:40:54 -- setup/common.sh@32 -- # continue 00:12:31.403 09:40:54 -- setup/common.sh@31 -- # IFS=': ' 00:12:31.403 09:40:54 -- setup/common.sh@31 -- # read -r var val _ 00:12:31.403 09:40:54 -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:31.403 09:40:54 -- setup/common.sh@32 -- # continue 00:12:31.403 09:40:54 -- setup/common.sh@31 -- # IFS=': ' 00:12:31.403 09:40:54 -- setup/common.sh@31 -- # read -r var val _ 00:12:31.403 09:40:54 -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:31.403 09:40:54 -- setup/common.sh@32 -- # continue 00:12:31.403 09:40:54 -- setup/common.sh@31 -- # IFS=': ' 00:12:31.403 09:40:54 -- setup/common.sh@31 -- # read -r var val _ 00:12:31.403 09:40:54 -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:31.403 09:40:54 -- setup/common.sh@32 -- # continue 00:12:31.403 09:40:54 -- setup/common.sh@31 -- # IFS=': ' 00:12:31.403 09:40:54 -- setup/common.sh@31 -- # read -r var val _ 00:12:31.403 09:40:54 -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:31.403 09:40:54 -- setup/common.sh@32 -- # continue 00:12:31.403 09:40:54 -- setup/common.sh@31 -- # IFS=': ' 00:12:31.403 09:40:54 -- setup/common.sh@31 -- # read -r var val _ 00:12:31.403 09:40:54 -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:31.403 09:40:54 -- setup/common.sh@32 -- # continue 00:12:31.403 09:40:54 -- setup/common.sh@31 -- # IFS=': ' 00:12:31.403 09:40:54 -- setup/common.sh@31 -- # read -r var val _ 00:12:31.403 09:40:54 -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:31.403 09:40:54 -- setup/common.sh@32 -- # continue 00:12:31.403 09:40:54 -- setup/common.sh@31 -- # IFS=': ' 00:12:31.403 09:40:54 -- setup/common.sh@31 -- # read -r var val _ 00:12:31.403 09:40:54 -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:31.403 09:40:54 -- setup/common.sh@32 -- # continue 00:12:31.403 09:40:54 -- setup/common.sh@31 -- # IFS=': ' 00:12:31.403 09:40:54 -- setup/common.sh@31 -- # read -r var val _ 00:12:31.404 09:40:54 -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:31.404 09:40:54 -- setup/common.sh@32 -- # continue 00:12:31.404 09:40:54 -- setup/common.sh@31 -- # IFS=': ' 00:12:31.404 09:40:54 -- setup/common.sh@31 -- # read -r var val _ 00:12:31.404 09:40:54 -- setup/common.sh@32 -- # [[ Bounce == 
\A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:31.404 09:40:54 -- setup/common.sh@32 -- # continue 00:12:31.404 09:40:54 -- setup/common.sh@31 -- # IFS=': ' 00:12:31.404 09:40:54 -- setup/common.sh@31 -- # read -r var val _ 00:12:31.404 09:40:54 -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:31.404 09:40:54 -- setup/common.sh@32 -- # continue 00:12:31.404 09:40:54 -- setup/common.sh@31 -- # IFS=': ' 00:12:31.404 09:40:54 -- setup/common.sh@31 -- # read -r var val _ 00:12:31.404 09:40:54 -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:31.404 09:40:54 -- setup/common.sh@32 -- # continue 00:12:31.404 09:40:54 -- setup/common.sh@31 -- # IFS=': ' 00:12:31.404 09:40:54 -- setup/common.sh@31 -- # read -r var val _ 00:12:31.404 09:40:54 -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:31.404 09:40:54 -- setup/common.sh@32 -- # continue 00:12:31.404 09:40:54 -- setup/common.sh@31 -- # IFS=': ' 00:12:31.404 09:40:54 -- setup/common.sh@31 -- # read -r var val _ 00:12:31.404 09:40:54 -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:31.404 09:40:54 -- setup/common.sh@32 -- # continue 00:12:31.404 09:40:54 -- setup/common.sh@31 -- # IFS=': ' 00:12:31.404 09:40:54 -- setup/common.sh@31 -- # read -r var val _ 00:12:31.404 09:40:54 -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:31.404 09:40:54 -- setup/common.sh@32 -- # continue 00:12:31.404 09:40:54 -- setup/common.sh@31 -- # IFS=': ' 00:12:31.404 09:40:54 -- setup/common.sh@31 -- # read -r var val _ 00:12:31.404 09:40:54 -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:31.404 09:40:54 -- setup/common.sh@32 -- # continue 00:12:31.404 09:40:54 -- setup/common.sh@31 -- # IFS=': ' 00:12:31.404 09:40:54 -- setup/common.sh@31 -- # read -r var val _ 00:12:31.404 09:40:54 -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:31.404 09:40:54 -- setup/common.sh@32 -- # continue 00:12:31.404 09:40:54 -- setup/common.sh@31 -- # IFS=': ' 00:12:31.404 09:40:54 -- setup/common.sh@31 -- # read -r var val _ 00:12:31.404 09:40:54 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:31.404 09:40:54 -- setup/common.sh@32 -- # continue 00:12:31.404 09:40:54 -- setup/common.sh@31 -- # IFS=': ' 00:12:31.404 09:40:54 -- setup/common.sh@31 -- # read -r var val _ 00:12:31.404 09:40:54 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:31.404 09:40:54 -- setup/common.sh@33 -- # echo 0 00:12:31.404 09:40:54 -- setup/common.sh@33 -- # return 0 00:12:31.404 09:40:54 -- setup/hugepages.sh@97 -- # anon=0 00:12:31.404 09:40:54 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:12:31.404 09:40:54 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:12:31.404 09:40:54 -- setup/common.sh@18 -- # local node= 00:12:31.404 09:40:54 -- setup/common.sh@19 -- # local var val 00:12:31.404 09:40:54 -- setup/common.sh@20 -- # local mem_f mem 00:12:31.404 09:40:54 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:12:31.404 09:40:54 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:12:31.404 09:40:54 -- setup/common.sh@25 -- # [[ -n '' ]] 00:12:31.404 09:40:54 -- setup/common.sh@28 -- # mapfile -t mem 00:12:31.404 09:40:54 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:12:31.404 09:40:54 -- setup/common.sh@31 -- # IFS=': ' 00:12:31.404 09:40:54 -- setup/common.sh@31 -- # 
read -r var val _ 00:12:31.404 09:40:54 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 7540232 kB' 'MemAvailable: 9502500 kB' 'Buffers: 2436 kB' 'Cached: 2174864 kB' 'SwapCached: 0 kB' 'Active: 850220 kB' 'Inactive: 1449604 kB' 'Active(anon): 132988 kB' 'Inactive(anon): 0 kB' 'Active(file): 717232 kB' 'Inactive(file): 1449604 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 1300 kB' 'Writeback: 0 kB' 'AnonPages: 124168 kB' 'Mapped: 48748 kB' 'Shmem: 10464 kB' 'KReclaimable: 64740 kB' 'Slab: 137832 kB' 'SReclaimable: 64740 kB' 'SUnreclaim: 73092 kB' 'KernelStack: 6368 kB' 'PageTables: 4452 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13459988 kB' 'Committed_AS: 357248 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54724 kB' 'VmallocChunk: 0 kB' 'Percpu: 6240 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 173932 kB' 'DirectMap2M: 6117376 kB' 'DirectMap1G: 8388608 kB' 00:12:31.404 09:40:54 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:31.404 09:40:54 -- setup/common.sh@32 -- # continue 00:12:31.404 09:40:54 -- setup/common.sh@31 -- # IFS=': ' 00:12:31.404 09:40:54 -- setup/common.sh@31 -- # read -r var val _ 00:12:31.404 09:40:54 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:31.404 09:40:54 -- setup/common.sh@32 -- # continue 00:12:31.404 09:40:54 -- setup/common.sh@31 -- # IFS=': ' 00:12:31.404 09:40:54 -- setup/common.sh@31 -- # read -r var val _ 00:12:31.404 09:40:54 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:31.404 09:40:54 -- setup/common.sh@32 -- # continue 00:12:31.404 09:40:54 -- setup/common.sh@31 -- # IFS=': ' 00:12:31.404 09:40:54 -- setup/common.sh@31 -- # read -r var val _ 00:12:31.404 09:40:54 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:31.404 09:40:54 -- setup/common.sh@32 -- # continue 00:12:31.404 09:40:54 -- setup/common.sh@31 -- # IFS=': ' 00:12:31.404 09:40:54 -- setup/common.sh@31 -- # read -r var val _ 00:12:31.404 09:40:54 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:31.404 09:40:54 -- setup/common.sh@32 -- # continue 00:12:31.404 09:40:54 -- setup/common.sh@31 -- # IFS=': ' 00:12:31.404 09:40:54 -- setup/common.sh@31 -- # read -r var val _ 00:12:31.404 09:40:54 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:31.404 09:40:54 -- setup/common.sh@32 -- # continue 00:12:31.404 09:40:54 -- setup/common.sh@31 -- # IFS=': ' 00:12:31.404 09:40:54 -- setup/common.sh@31 -- # read -r var val _ 00:12:31.404 09:40:54 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:31.404 09:40:54 -- setup/common.sh@32 -- # continue 00:12:31.404 09:40:54 -- setup/common.sh@31 -- # IFS=': ' 00:12:31.404 09:40:54 -- setup/common.sh@31 -- # read -r var val _ 00:12:31.404 09:40:54 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:31.404 09:40:54 -- setup/common.sh@32 -- # continue 00:12:31.404 09:40:54 -- setup/common.sh@31 -- # IFS=': ' 00:12:31.404 09:40:54 -- setup/common.sh@31 -- # read 
-r var val _ 00:12:31.404 09:40:54 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:31.404 09:40:54 -- setup/common.sh@32 -- # continue 00:12:31.404 09:40:54 -- setup/common.sh@31 -- # IFS=': ' 00:12:31.404 09:40:54 -- setup/common.sh@31 -- # read -r var val _ 00:12:31.404 09:40:54 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:31.404 09:40:54 -- setup/common.sh@32 -- # continue 00:12:31.405 09:40:54 -- setup/common.sh@31 -- # IFS=': ' 00:12:31.405 09:40:54 -- setup/common.sh@31 -- # read -r var val _ 00:12:31.405 09:40:54 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:31.405 09:40:54 -- setup/common.sh@32 -- # continue 00:12:31.405 09:40:54 -- setup/common.sh@31 -- # IFS=': ' 00:12:31.405 09:40:54 -- setup/common.sh@31 -- # read -r var val _ 00:12:31.405 09:40:54 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:31.405 09:40:54 -- setup/common.sh@32 -- # continue 00:12:31.405 09:40:54 -- setup/common.sh@31 -- # IFS=': ' 00:12:31.405 09:40:54 -- setup/common.sh@31 -- # read -r var val _ 00:12:31.405 09:40:54 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:31.405 09:40:54 -- setup/common.sh@32 -- # continue 00:12:31.405 09:40:54 -- setup/common.sh@31 -- # IFS=': ' 00:12:31.405 09:40:54 -- setup/common.sh@31 -- # read -r var val _ 00:12:31.405 09:40:54 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:31.405 09:40:54 -- setup/common.sh@32 -- # continue 00:12:31.405 09:40:54 -- setup/common.sh@31 -- # IFS=': ' 00:12:31.405 09:40:54 -- setup/common.sh@31 -- # read -r var val _ 00:12:31.405 09:40:54 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:31.405 09:40:54 -- setup/common.sh@32 -- # continue 00:12:31.405 09:40:54 -- setup/common.sh@31 -- # IFS=': ' 00:12:31.405 09:40:54 -- setup/common.sh@31 -- # read -r var val _ 00:12:31.405 09:40:54 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:31.405 09:40:54 -- setup/common.sh@32 -- # continue 00:12:31.405 09:40:54 -- setup/common.sh@31 -- # IFS=': ' 00:12:31.405 09:40:54 -- setup/common.sh@31 -- # read -r var val _ 00:12:31.405 09:40:54 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:31.405 09:40:54 -- setup/common.sh@32 -- # continue 00:12:31.405 09:40:54 -- setup/common.sh@31 -- # IFS=': ' 00:12:31.405 09:40:54 -- setup/common.sh@31 -- # read -r var val _ 00:12:31.405 09:40:54 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:31.405 09:40:54 -- setup/common.sh@32 -- # continue 00:12:31.405 09:40:54 -- setup/common.sh@31 -- # IFS=': ' 00:12:31.405 09:40:54 -- setup/common.sh@31 -- # read -r var val _ 00:12:31.405 09:40:54 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:31.405 09:40:54 -- setup/common.sh@32 -- # continue 00:12:31.405 09:40:54 -- setup/common.sh@31 -- # IFS=': ' 00:12:31.405 09:40:54 -- setup/common.sh@31 -- # read -r var val _ 00:12:31.405 09:40:54 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:31.405 09:40:54 -- setup/common.sh@32 -- # continue 00:12:31.405 09:40:54 -- setup/common.sh@31 -- # IFS=': ' 00:12:31.405 09:40:54 -- setup/common.sh@31 -- # read -r var val _ 00:12:31.405 09:40:54 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:31.405 09:40:54 -- setup/common.sh@32 -- # continue 00:12:31.405 
09:40:54 -- setup/common.sh@31 -- # IFS=': ' 00:12:31.405 09:40:54 -- setup/common.sh@31 -- # read -r var val _ 00:12:31.405 09:40:54 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:31.405 09:40:54 -- setup/common.sh@32 -- # continue 00:12:31.405 09:40:54 -- setup/common.sh@31 -- # IFS=': ' 00:12:31.405 09:40:54 -- setup/common.sh@31 -- # read -r var val _ 00:12:31.405 09:40:54 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:31.405 09:40:54 -- setup/common.sh@32 -- # continue 00:12:31.405 09:40:54 -- setup/common.sh@31 -- # IFS=': ' 00:12:31.405 09:40:54 -- setup/common.sh@31 -- # read -r var val _ 00:12:31.405 09:40:54 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:31.405 09:40:54 -- setup/common.sh@32 -- # continue 00:12:31.405 09:40:54 -- setup/common.sh@31 -- # IFS=': ' 00:12:31.405 09:40:54 -- setup/common.sh@31 -- # read -r var val _ 00:12:31.405 09:40:54 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:31.405 09:40:54 -- setup/common.sh@32 -- # continue 00:12:31.405 09:40:54 -- setup/common.sh@31 -- # IFS=': ' 00:12:31.405 09:40:54 -- setup/common.sh@31 -- # read -r var val _ 00:12:31.405 09:40:54 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:31.405 09:40:54 -- setup/common.sh@32 -- # continue 00:12:31.405 09:40:54 -- setup/common.sh@31 -- # IFS=': ' 00:12:31.405 09:40:54 -- setup/common.sh@31 -- # read -r var val _ 00:12:31.405 09:40:54 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:31.405 09:40:54 -- setup/common.sh@32 -- # continue 00:12:31.405 09:40:54 -- setup/common.sh@31 -- # IFS=': ' 00:12:31.405 09:40:54 -- setup/common.sh@31 -- # read -r var val _ 00:12:31.405 09:40:54 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:31.405 09:40:54 -- setup/common.sh@32 -- # continue 00:12:31.405 09:40:54 -- setup/common.sh@31 -- # IFS=': ' 00:12:31.405 09:40:54 -- setup/common.sh@31 -- # read -r var val _ 00:12:31.405 09:40:54 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:31.405 09:40:54 -- setup/common.sh@32 -- # continue 00:12:31.405 09:40:54 -- setup/common.sh@31 -- # IFS=': ' 00:12:31.405 09:40:54 -- setup/common.sh@31 -- # read -r var val _ 00:12:31.405 09:40:54 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:31.405 09:40:54 -- setup/common.sh@32 -- # continue 00:12:31.405 09:40:54 -- setup/common.sh@31 -- # IFS=': ' 00:12:31.405 09:40:54 -- setup/common.sh@31 -- # read -r var val _ 00:12:31.405 09:40:54 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:31.405 09:40:54 -- setup/common.sh@32 -- # continue 00:12:31.405 09:40:54 -- setup/common.sh@31 -- # IFS=': ' 00:12:31.405 09:40:54 -- setup/common.sh@31 -- # read -r var val _ 00:12:31.405 09:40:54 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:31.405 09:40:54 -- setup/common.sh@32 -- # continue 00:12:31.405 09:40:54 -- setup/common.sh@31 -- # IFS=': ' 00:12:31.405 09:40:54 -- setup/common.sh@31 -- # read -r var val _ 00:12:31.405 09:40:54 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:31.405 09:40:54 -- setup/common.sh@32 -- # continue 00:12:31.405 09:40:54 -- setup/common.sh@31 -- # IFS=': ' 00:12:31.405 09:40:54 -- setup/common.sh@31 -- # read -r var val _ 00:12:31.405 09:40:54 -- setup/common.sh@32 -- # [[ CommitLimit == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:31.405 09:40:54 -- setup/common.sh@32 -- # continue 00:12:31.405 09:40:54 -- setup/common.sh@31 -- # IFS=': ' 00:12:31.405 09:40:54 -- setup/common.sh@31 -- # read -r var val _ 00:12:31.405 09:40:54 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:31.405 09:40:54 -- setup/common.sh@32 -- # continue 00:12:31.405 09:40:54 -- setup/common.sh@31 -- # IFS=': ' 00:12:31.405 09:40:54 -- setup/common.sh@31 -- # read -r var val _ 00:12:31.405 09:40:54 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:31.405 09:40:54 -- setup/common.sh@32 -- # continue 00:12:31.405 09:40:54 -- setup/common.sh@31 -- # IFS=': ' 00:12:31.405 09:40:54 -- setup/common.sh@31 -- # read -r var val _ 00:12:31.405 09:40:54 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:31.405 09:40:54 -- setup/common.sh@32 -- # continue 00:12:31.405 09:40:54 -- setup/common.sh@31 -- # IFS=': ' 00:12:31.406 09:40:54 -- setup/common.sh@31 -- # read -r var val _ 00:12:31.406 09:40:54 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:31.406 09:40:54 -- setup/common.sh@32 -- # continue 00:12:31.406 09:40:54 -- setup/common.sh@31 -- # IFS=': ' 00:12:31.406 09:40:54 -- setup/common.sh@31 -- # read -r var val _ 00:12:31.406 09:40:54 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:31.406 09:40:54 -- setup/common.sh@32 -- # continue 00:12:31.406 09:40:54 -- setup/common.sh@31 -- # IFS=': ' 00:12:31.406 09:40:54 -- setup/common.sh@31 -- # read -r var val _ 00:12:31.406 09:40:54 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:31.406 09:40:54 -- setup/common.sh@32 -- # continue 00:12:31.406 09:40:54 -- setup/common.sh@31 -- # IFS=': ' 00:12:31.406 09:40:54 -- setup/common.sh@31 -- # read -r var val _ 00:12:31.406 09:40:54 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:31.406 09:40:54 -- setup/common.sh@32 -- # continue 00:12:31.406 09:40:54 -- setup/common.sh@31 -- # IFS=': ' 00:12:31.406 09:40:54 -- setup/common.sh@31 -- # read -r var val _ 00:12:31.406 09:40:54 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:31.406 09:40:54 -- setup/common.sh@32 -- # continue 00:12:31.406 09:40:54 -- setup/common.sh@31 -- # IFS=': ' 00:12:31.406 09:40:54 -- setup/common.sh@31 -- # read -r var val _ 00:12:31.406 09:40:54 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:31.406 09:40:54 -- setup/common.sh@32 -- # continue 00:12:31.406 09:40:54 -- setup/common.sh@31 -- # IFS=': ' 00:12:31.406 09:40:54 -- setup/common.sh@31 -- # read -r var val _ 00:12:31.406 09:40:54 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:31.406 09:40:54 -- setup/common.sh@32 -- # continue 00:12:31.406 09:40:54 -- setup/common.sh@31 -- # IFS=': ' 00:12:31.406 09:40:54 -- setup/common.sh@31 -- # read -r var val _ 00:12:31.406 09:40:54 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:31.406 09:40:54 -- setup/common.sh@32 -- # continue 00:12:31.406 09:40:54 -- setup/common.sh@31 -- # IFS=': ' 00:12:31.406 09:40:54 -- setup/common.sh@31 -- # read -r var val _ 00:12:31.406 09:40:54 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:31.406 09:40:54 -- setup/common.sh@32 -- # continue 00:12:31.406 09:40:54 -- setup/common.sh@31 -- # IFS=': ' 
00:12:31.406 09:40:54 -- setup/common.sh@31 -- # read -r var val _ 00:12:31.406 09:40:54 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:31.406 09:40:54 -- setup/common.sh@32 -- # continue 00:12:31.406 09:40:54 -- setup/common.sh@31 -- # IFS=': ' 00:12:31.406 09:40:54 -- setup/common.sh@31 -- # read -r var val _ 00:12:31.406 09:40:54 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:31.406 09:40:54 -- setup/common.sh@32 -- # continue 00:12:31.406 09:40:54 -- setup/common.sh@31 -- # IFS=': ' 00:12:31.406 09:40:54 -- setup/common.sh@31 -- # read -r var val _ 00:12:31.406 09:40:54 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:31.406 09:40:54 -- setup/common.sh@32 -- # continue 00:12:31.406 09:40:54 -- setup/common.sh@31 -- # IFS=': ' 00:12:31.406 09:40:54 -- setup/common.sh@31 -- # read -r var val _ 00:12:31.406 09:40:54 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:31.406 09:40:54 -- setup/common.sh@32 -- # continue 00:12:31.406 09:40:54 -- setup/common.sh@31 -- # IFS=': ' 00:12:31.406 09:40:54 -- setup/common.sh@31 -- # read -r var val _ 00:12:31.406 09:40:54 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:31.406 09:40:54 -- setup/common.sh@32 -- # continue 00:12:31.406 09:40:54 -- setup/common.sh@31 -- # IFS=': ' 00:12:31.406 09:40:54 -- setup/common.sh@31 -- # read -r var val _ 00:12:31.406 09:40:54 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:31.406 09:40:54 -- setup/common.sh@33 -- # echo 0 00:12:31.406 09:40:54 -- setup/common.sh@33 -- # return 0 00:12:31.406 09:40:54 -- setup/hugepages.sh@99 -- # surp=0 00:12:31.406 09:40:54 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:12:31.406 09:40:54 -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:12:31.406 09:40:54 -- setup/common.sh@18 -- # local node= 00:12:31.406 09:40:54 -- setup/common.sh@19 -- # local var val 00:12:31.406 09:40:54 -- setup/common.sh@20 -- # local mem_f mem 00:12:31.406 09:40:54 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:12:31.406 09:40:54 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:12:31.406 09:40:54 -- setup/common.sh@25 -- # [[ -n '' ]] 00:12:31.406 09:40:54 -- setup/common.sh@28 -- # mapfile -t mem 00:12:31.406 09:40:54 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:12:31.406 09:40:54 -- setup/common.sh@31 -- # IFS=': ' 00:12:31.406 09:40:54 -- setup/common.sh@31 -- # read -r var val _ 00:12:31.406 09:40:54 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 7540232 kB' 'MemAvailable: 9502500 kB' 'Buffers: 2436 kB' 'Cached: 2174864 kB' 'SwapCached: 0 kB' 'Active: 850168 kB' 'Inactive: 1449604 kB' 'Active(anon): 132936 kB' 'Inactive(anon): 0 kB' 'Active(file): 717232 kB' 'Inactive(file): 1449604 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 1300 kB' 'Writeback: 0 kB' 'AnonPages: 124064 kB' 'Mapped: 48748 kB' 'Shmem: 10464 kB' 'KReclaimable: 64740 kB' 'Slab: 137832 kB' 'SReclaimable: 64740 kB' 'SUnreclaim: 73092 kB' 'KernelStack: 6352 kB' 'PageTables: 4400 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13459988 kB' 'Committed_AS: 357248 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54724 kB' 'VmallocChunk: 0 kB' 'Percpu: 6240 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 
0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 173932 kB' 'DirectMap2M: 6117376 kB' 'DirectMap1G: 8388608 kB' 00:12:31.406 09:40:54 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:31.406 09:40:54 -- setup/common.sh@32 -- # continue 00:12:31.406 09:40:54 -- setup/common.sh@31 -- # IFS=': ' 00:12:31.406 09:40:54 -- setup/common.sh@31 -- # read -r var val _ 00:12:31.406 09:40:54 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:31.406 09:40:54 -- setup/common.sh@32 -- # continue 00:12:31.406 09:40:54 -- setup/common.sh@31 -- # IFS=': ' 00:12:31.406 09:40:54 -- setup/common.sh@31 -- # read -r var val _ 00:12:31.406 09:40:54 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:31.406 09:40:54 -- setup/common.sh@32 -- # continue 00:12:31.406 09:40:54 -- setup/common.sh@31 -- # IFS=': ' 00:12:31.406 09:40:54 -- setup/common.sh@31 -- # read -r var val _ 00:12:31.406 09:40:54 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:31.406 09:40:54 -- setup/common.sh@32 -- # continue 00:12:31.406 09:40:54 -- setup/common.sh@31 -- # IFS=': ' 00:12:31.406 09:40:54 -- setup/common.sh@31 -- # read -r var val _ 00:12:31.406 09:40:54 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:31.406 09:40:54 -- setup/common.sh@32 -- # continue 00:12:31.406 09:40:54 -- setup/common.sh@31 -- # IFS=': ' 00:12:31.406 09:40:54 -- setup/common.sh@31 -- # read -r var val _ 00:12:31.407 09:40:54 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:31.407 09:40:54 -- setup/common.sh@32 -- # continue 00:12:31.407 09:40:54 -- setup/common.sh@31 -- # IFS=': ' 00:12:31.407 09:40:54 -- setup/common.sh@31 -- # read -r var val _ 00:12:31.407 09:40:54 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:31.407 09:40:54 -- setup/common.sh@32 -- # continue 00:12:31.407 09:40:54 -- setup/common.sh@31 -- # IFS=': ' 00:12:31.407 09:40:54 -- setup/common.sh@31 -- # read -r var val _ 00:12:31.407 09:40:54 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:31.407 09:40:54 -- setup/common.sh@32 -- # continue 00:12:31.407 09:40:54 -- setup/common.sh@31 -- # IFS=': ' 00:12:31.407 09:40:54 -- setup/common.sh@31 -- # read -r var val _ 00:12:31.407 09:40:54 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:31.407 09:40:54 -- setup/common.sh@32 -- # continue 00:12:31.407 09:40:54 -- setup/common.sh@31 -- # IFS=': ' 00:12:31.407 09:40:54 -- setup/common.sh@31 -- # read -r var val _ 00:12:31.407 09:40:54 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:31.407 09:40:54 -- setup/common.sh@32 -- # continue 00:12:31.407 09:40:54 -- setup/common.sh@31 -- # IFS=': ' 00:12:31.407 09:40:54 -- setup/common.sh@31 -- # read -r var val _ 00:12:31.407 09:40:54 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:31.407 09:40:54 -- setup/common.sh@32 -- # continue 00:12:31.407 09:40:54 -- setup/common.sh@31 -- # IFS=': ' 00:12:31.407 09:40:54 -- setup/common.sh@31 -- # read -r var val _ 00:12:31.407 09:40:54 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 
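(Note: each get_meminfo call in this trace opens with "local node=", a check for /sys/devices/system/node/node$node/meminfo, and the prefix strip mem=("${mem[@]#Node +([0-9]) }"); with no node given, as here, the global /proc/meminfo is read instead. A hypothetical, simplified version of that source selection, using sed in place of the script's extglob prefix strip:

# Hypothetical sketch of the per-node/global source selection visible at the
# start of each get_meminfo call above; not the actual SPDK code.
get_node_meminfo() {
    local node=$1 mem_f=/proc/meminfo
    if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
        mem_f=/sys/devices/system/node/node$node/meminfo
    fi
    # Per-node files prefix every line with "Node N "; drop it so the
    # output parses exactly like /proc/meminfo.
    sed 's/^Node [0-9]* *//' "$mem_f"
}

# Example: get_node_meminfo 0 | grep '^HugePages_Total:')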
00:12:31.407 09:40:54 -- setup/common.sh@32 -- # continue 00:12:31.407 09:40:54 -- setup/common.sh@31 -- # IFS=': ' 00:12:31.407 09:40:54 -- setup/common.sh@31 -- # read -r var val _ 00:12:31.407 09:40:54 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:31.407 09:40:54 -- setup/common.sh@32 -- # continue 00:12:31.407 09:40:54 -- setup/common.sh@31 -- # IFS=': ' 00:12:31.407 09:40:54 -- setup/common.sh@31 -- # read -r var val _ 00:12:31.407 09:40:54 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:31.407 09:40:54 -- setup/common.sh@32 -- # continue 00:12:31.407 09:40:54 -- setup/common.sh@31 -- # IFS=': ' 00:12:31.407 09:40:54 -- setup/common.sh@31 -- # read -r var val _ 00:12:31.407 09:40:54 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:31.407 09:40:54 -- setup/common.sh@32 -- # continue 00:12:31.407 09:40:54 -- setup/common.sh@31 -- # IFS=': ' 00:12:31.407 09:40:54 -- setup/common.sh@31 -- # read -r var val _ 00:12:31.407 09:40:54 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:31.407 09:40:54 -- setup/common.sh@32 -- # continue 00:12:31.407 09:40:54 -- setup/common.sh@31 -- # IFS=': ' 00:12:31.407 09:40:54 -- setup/common.sh@31 -- # read -r var val _ 00:12:31.407 09:40:54 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:31.407 09:40:54 -- setup/common.sh@32 -- # continue 00:12:31.407 09:40:54 -- setup/common.sh@31 -- # IFS=': ' 00:12:31.407 09:40:54 -- setup/common.sh@31 -- # read -r var val _ 00:12:31.407 09:40:54 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:31.407 09:40:54 -- setup/common.sh@32 -- # continue 00:12:31.407 09:40:54 -- setup/common.sh@31 -- # IFS=': ' 00:12:31.407 09:40:54 -- setup/common.sh@31 -- # read -r var val _ 00:12:31.407 09:40:54 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:31.407 09:40:54 -- setup/common.sh@32 -- # continue 00:12:31.407 09:40:54 -- setup/common.sh@31 -- # IFS=': ' 00:12:31.407 09:40:54 -- setup/common.sh@31 -- # read -r var val _ 00:12:31.407 09:40:54 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:31.407 09:40:54 -- setup/common.sh@32 -- # continue 00:12:31.407 09:40:54 -- setup/common.sh@31 -- # IFS=': ' 00:12:31.407 09:40:54 -- setup/common.sh@31 -- # read -r var val _ 00:12:31.407 09:40:54 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:31.407 09:40:54 -- setup/common.sh@32 -- # continue 00:12:31.407 09:40:54 -- setup/common.sh@31 -- # IFS=': ' 00:12:31.407 09:40:54 -- setup/common.sh@31 -- # read -r var val _ 00:12:31.407 09:40:54 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:31.407 09:40:54 -- setup/common.sh@32 -- # continue 00:12:31.407 09:40:54 -- setup/common.sh@31 -- # IFS=': ' 00:12:31.407 09:40:54 -- setup/common.sh@31 -- # read -r var val _ 00:12:31.407 09:40:54 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:31.407 09:40:54 -- setup/common.sh@32 -- # continue 00:12:31.407 09:40:54 -- setup/common.sh@31 -- # IFS=': ' 00:12:31.407 09:40:54 -- setup/common.sh@31 -- # read -r var val _ 00:12:31.407 09:40:54 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:31.407 09:40:54 -- setup/common.sh@32 -- # continue 00:12:31.407 09:40:54 -- setup/common.sh@31 -- # IFS=': ' 00:12:31.407 09:40:54 -- setup/common.sh@31 -- # read -r var val _ 00:12:31.407 09:40:54 -- 
setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:31.407 09:40:54 -- setup/common.sh@32 -- # continue 00:12:31.407 09:40:54 -- setup/common.sh@31 -- # IFS=': ' 00:12:31.407 09:40:54 -- setup/common.sh@31 -- # read -r var val _ 00:12:31.407 09:40:54 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:31.407 09:40:54 -- setup/common.sh@32 -- # continue 00:12:31.407 09:40:54 -- setup/common.sh@31 -- # IFS=': ' 00:12:31.407 09:40:54 -- setup/common.sh@31 -- # read -r var val _ 00:12:31.407 09:40:54 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:31.407 09:40:54 -- setup/common.sh@32 -- # continue 00:12:31.407 09:40:54 -- setup/common.sh@31 -- # IFS=': ' 00:12:31.407 09:40:54 -- setup/common.sh@31 -- # read -r var val _ 00:12:31.407 09:40:54 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:31.407 09:40:54 -- setup/common.sh@32 -- # continue 00:12:31.407 09:40:54 -- setup/common.sh@31 -- # IFS=': ' 00:12:31.407 09:40:54 -- setup/common.sh@31 -- # read -r var val _ 00:12:31.407 09:40:54 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:31.407 09:40:54 -- setup/common.sh@32 -- # continue 00:12:31.407 09:40:54 -- setup/common.sh@31 -- # IFS=': ' 00:12:31.407 09:40:54 -- setup/common.sh@31 -- # read -r var val _ 00:12:31.407 09:40:54 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:31.407 09:40:54 -- setup/common.sh@32 -- # continue 00:12:31.407 09:40:54 -- setup/common.sh@31 -- # IFS=': ' 00:12:31.407 09:40:54 -- setup/common.sh@31 -- # read -r var val _ 00:12:31.407 09:40:54 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:31.407 09:40:54 -- setup/common.sh@32 -- # continue 00:12:31.407 09:40:54 -- setup/common.sh@31 -- # IFS=': ' 00:12:31.407 09:40:54 -- setup/common.sh@31 -- # read -r var val _ 00:12:31.407 09:40:54 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:31.407 09:40:54 -- setup/common.sh@32 -- # continue 00:12:31.407 09:40:54 -- setup/common.sh@31 -- # IFS=': ' 00:12:31.407 09:40:54 -- setup/common.sh@31 -- # read -r var val _ 00:12:31.407 09:40:54 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:31.407 09:40:54 -- setup/common.sh@32 -- # continue 00:12:31.408 09:40:54 -- setup/common.sh@31 -- # IFS=': ' 00:12:31.408 09:40:54 -- setup/common.sh@31 -- # read -r var val _ 00:12:31.408 09:40:54 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:31.408 09:40:54 -- setup/common.sh@32 -- # continue 00:12:31.408 09:40:54 -- setup/common.sh@31 -- # IFS=': ' 00:12:31.408 09:40:54 -- setup/common.sh@31 -- # read -r var val _ 00:12:31.408 09:40:54 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:31.408 09:40:54 -- setup/common.sh@32 -- # continue 00:12:31.408 09:40:54 -- setup/common.sh@31 -- # IFS=': ' 00:12:31.408 09:40:54 -- setup/common.sh@31 -- # read -r var val _ 00:12:31.408 09:40:54 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:31.408 09:40:54 -- setup/common.sh@32 -- # continue 00:12:31.408 09:40:54 -- setup/common.sh@31 -- # IFS=': ' 00:12:31.408 09:40:54 -- setup/common.sh@31 -- # read -r var val _ 00:12:31.408 09:40:54 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:31.408 09:40:54 -- setup/common.sh@32 -- # continue 00:12:31.408 09:40:54 -- setup/common.sh@31 
-- # IFS=': ' 00:12:31.408 09:40:54 -- setup/common.sh@31 -- # read -r var val _ 00:12:31.408 09:40:54 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:31.408 09:40:54 -- setup/common.sh@32 -- # continue 00:12:31.408 09:40:54 -- setup/common.sh@31 -- # IFS=': ' 00:12:31.408 09:40:54 -- setup/common.sh@31 -- # read -r var val _ 00:12:31.408 09:40:54 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:31.408 09:40:54 -- setup/common.sh@32 -- # continue 00:12:31.408 09:40:54 -- setup/common.sh@31 -- # IFS=': ' 00:12:31.408 09:40:54 -- setup/common.sh@31 -- # read -r var val _ 00:12:31.408 09:40:54 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:31.408 09:40:54 -- setup/common.sh@32 -- # continue 00:12:31.408 09:40:54 -- setup/common.sh@31 -- # IFS=': ' 00:12:31.408 09:40:54 -- setup/common.sh@31 -- # read -r var val _ 00:12:31.408 09:40:54 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:31.408 09:40:54 -- setup/common.sh@32 -- # continue 00:12:31.408 09:40:54 -- setup/common.sh@31 -- # IFS=': ' 00:12:31.408 09:40:54 -- setup/common.sh@31 -- # read -r var val _ 00:12:31.408 09:40:54 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:31.408 09:40:54 -- setup/common.sh@32 -- # continue 00:12:31.408 09:40:54 -- setup/common.sh@31 -- # IFS=': ' 00:12:31.408 09:40:54 -- setup/common.sh@31 -- # read -r var val _ 00:12:31.408 09:40:54 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:31.408 09:40:54 -- setup/common.sh@32 -- # continue 00:12:31.408 09:40:54 -- setup/common.sh@31 -- # IFS=': ' 00:12:31.408 09:40:54 -- setup/common.sh@31 -- # read -r var val _ 00:12:31.408 09:40:54 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:31.408 09:40:54 -- setup/common.sh@32 -- # continue 00:12:31.408 09:40:54 -- setup/common.sh@31 -- # IFS=': ' 00:12:31.408 09:40:54 -- setup/common.sh@31 -- # read -r var val _ 00:12:31.408 09:40:54 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:31.408 09:40:54 -- setup/common.sh@32 -- # continue 00:12:31.408 09:40:54 -- setup/common.sh@31 -- # IFS=': ' 00:12:31.408 09:40:54 -- setup/common.sh@31 -- # read -r var val _ 00:12:31.408 09:40:54 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:31.408 09:40:54 -- setup/common.sh@32 -- # continue 00:12:31.408 09:40:54 -- setup/common.sh@31 -- # IFS=': ' 00:12:31.408 09:40:54 -- setup/common.sh@31 -- # read -r var val _ 00:12:31.408 09:40:54 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:31.408 09:40:54 -- setup/common.sh@32 -- # continue 00:12:31.408 09:40:54 -- setup/common.sh@31 -- # IFS=': ' 00:12:31.408 09:40:54 -- setup/common.sh@31 -- # read -r var val _ 00:12:31.408 09:40:54 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:31.408 09:40:54 -- setup/common.sh@32 -- # continue 00:12:31.408 09:40:54 -- setup/common.sh@31 -- # IFS=': ' 00:12:31.408 09:40:54 -- setup/common.sh@31 -- # read -r var val _ 00:12:31.408 09:40:54 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:31.408 09:40:54 -- setup/common.sh@32 -- # continue 00:12:31.408 09:40:54 -- setup/common.sh@31 -- # IFS=': ' 00:12:31.408 09:40:54 -- setup/common.sh@31 -- # read -r var val _ 00:12:31.408 09:40:54 -- setup/common.sh@32 -- # [[ HugePages_Free == 
\H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:31.408 09:40:54 -- setup/common.sh@32 -- # continue 00:12:31.408 09:40:54 -- setup/common.sh@31 -- # IFS=': ' 00:12:31.408 09:40:54 -- setup/common.sh@31 -- # read -r var val _ 00:12:31.408 09:40:54 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:31.408 09:40:54 -- setup/common.sh@33 -- # echo 0 00:12:31.408 09:40:54 -- setup/common.sh@33 -- # return 0 00:12:31.408 nr_hugepages=1025 00:12:31.408 resv_hugepages=0 00:12:31.408 09:40:54 -- setup/hugepages.sh@100 -- # resv=0 00:12:31.408 09:40:54 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1025 00:12:31.408 09:40:54 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:12:31.408 surplus_hugepages=0 00:12:31.408 09:40:54 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:12:31.408 anon_hugepages=0 00:12:31.408 09:40:54 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:12:31.408 09:40:54 -- setup/hugepages.sh@107 -- # (( 1025 == nr_hugepages + surp + resv )) 00:12:31.408 09:40:54 -- setup/hugepages.sh@109 -- # (( 1025 == nr_hugepages )) 00:12:31.408 09:40:54 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:12:31.408 09:40:54 -- setup/common.sh@17 -- # local get=HugePages_Total 00:12:31.408 09:40:54 -- setup/common.sh@18 -- # local node= 00:12:31.408 09:40:54 -- setup/common.sh@19 -- # local var val 00:12:31.408 09:40:54 -- setup/common.sh@20 -- # local mem_f mem 00:12:31.408 09:40:54 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:12:31.408 09:40:54 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:12:31.408 09:40:54 -- setup/common.sh@25 -- # [[ -n '' ]] 00:12:31.408 09:40:54 -- setup/common.sh@28 -- # mapfile -t mem 00:12:31.408 09:40:54 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:12:31.408 09:40:54 -- setup/common.sh@31 -- # IFS=': ' 00:12:31.409 09:40:54 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 7540232 kB' 'MemAvailable: 9502500 kB' 'Buffers: 2436 kB' 'Cached: 2174864 kB' 'SwapCached: 0 kB' 'Active: 850020 kB' 'Inactive: 1449604 kB' 'Active(anon): 132788 kB' 'Inactive(anon): 0 kB' 'Active(file): 717232 kB' 'Inactive(file): 1449604 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 1300 kB' 'Writeback: 0 kB' 'AnonPages: 124012 kB' 'Mapped: 48748 kB' 'Shmem: 10464 kB' 'KReclaimable: 64740 kB' 'Slab: 137832 kB' 'SReclaimable: 64740 kB' 'SUnreclaim: 73092 kB' 'KernelStack: 6368 kB' 'PageTables: 4456 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13459988 kB' 'Committed_AS: 357248 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54724 kB' 'VmallocChunk: 0 kB' 'Percpu: 6240 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 173932 kB' 'DirectMap2M: 6117376 kB' 'DirectMap1G: 8388608 kB' 00:12:31.409 09:40:54 -- setup/common.sh@31 -- # read -r var val _ 00:12:31.409 09:40:54 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:31.409 09:40:54 -- setup/common.sh@32 -- # continue 00:12:31.409 09:40:54 -- setup/common.sh@31 -- # IFS=': ' 00:12:31.409 09:40:54 -- setup/common.sh@31 -- # read -r var val _ 00:12:31.409 09:40:54 
-- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:31.409 09:40:54 -- setup/common.sh@32 -- # continue 00:12:31.409 09:40:54 -- setup/common.sh@31 -- # IFS=': ' 00:12:31.409 09:40:54 -- setup/common.sh@31 -- # read -r var val _ 00:12:31.409 09:40:54 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:31.409 09:40:54 -- setup/common.sh@32 -- # continue 00:12:31.409 09:40:54 -- setup/common.sh@31 -- # IFS=': ' 00:12:31.409 09:40:54 -- setup/common.sh@31 -- # read -r var val _ 00:12:31.409 09:40:54 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:31.409 09:40:54 -- setup/common.sh@32 -- # continue 00:12:31.409 09:40:54 -- setup/common.sh@31 -- # IFS=': ' 00:12:31.409 09:40:54 -- setup/common.sh@31 -- # read -r var val _ 00:12:31.409 09:40:54 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:31.409 09:40:54 -- setup/common.sh@32 -- # continue 00:12:31.409 09:40:54 -- setup/common.sh@31 -- # IFS=': ' 00:12:31.409 09:40:54 -- setup/common.sh@31 -- # read -r var val _ 00:12:31.409 09:40:54 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:31.409 09:40:54 -- setup/common.sh@32 -- # continue 00:12:31.409 09:40:54 -- setup/common.sh@31 -- # IFS=': ' 00:12:31.409 09:40:54 -- setup/common.sh@31 -- # read -r var val _ 00:12:31.409 09:40:54 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:31.409 09:40:54 -- setup/common.sh@32 -- # continue 00:12:31.409 09:40:54 -- setup/common.sh@31 -- # IFS=': ' 00:12:31.409 09:40:54 -- setup/common.sh@31 -- # read -r var val _ 00:12:31.409 09:40:54 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:31.409 09:40:54 -- setup/common.sh@32 -- # continue 00:12:31.409 09:40:54 -- setup/common.sh@31 -- # IFS=': ' 00:12:31.409 09:40:54 -- setup/common.sh@31 -- # read -r var val _ 00:12:31.409 09:40:54 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:31.409 09:40:54 -- setup/common.sh@32 -- # continue 00:12:31.409 09:40:54 -- setup/common.sh@31 -- # IFS=': ' 00:12:31.409 09:40:54 -- setup/common.sh@31 -- # read -r var val _ 00:12:31.409 09:40:54 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:31.409 09:40:54 -- setup/common.sh@32 -- # continue 00:12:31.409 09:40:54 -- setup/common.sh@31 -- # IFS=': ' 00:12:31.409 09:40:54 -- setup/common.sh@31 -- # read -r var val _ 00:12:31.409 09:40:54 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:31.409 09:40:54 -- setup/common.sh@32 -- # continue 00:12:31.409 09:40:54 -- setup/common.sh@31 -- # IFS=': ' 00:12:31.409 09:40:54 -- setup/common.sh@31 -- # read -r var val _ 00:12:31.409 09:40:54 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:31.409 09:40:54 -- setup/common.sh@32 -- # continue 00:12:31.409 09:40:54 -- setup/common.sh@31 -- # IFS=': ' 00:12:31.409 09:40:54 -- setup/common.sh@31 -- # read -r var val _ 00:12:31.409 09:40:54 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:31.409 09:40:54 -- setup/common.sh@32 -- # continue 00:12:31.409 09:40:54 -- setup/common.sh@31 -- # IFS=': ' 00:12:31.409 09:40:54 -- setup/common.sh@31 -- # read -r var val _ 00:12:31.409 09:40:54 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:31.409 09:40:54 -- setup/common.sh@32 -- # continue 00:12:31.409 09:40:54 -- 
setup/common.sh@31 -- # IFS=': ' 00:12:31.409 09:40:54 -- setup/common.sh@31 -- # read -r var val _ 00:12:31.409 09:40:54 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:31.409 09:40:54 -- setup/common.sh@32 -- # continue 00:12:31.409 09:40:54 -- setup/common.sh@31 -- # IFS=': ' 00:12:31.409 09:40:54 -- setup/common.sh@31 -- # read -r var val _ 00:12:31.409 09:40:54 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:31.409 09:40:54 -- setup/common.sh@32 -- # continue 00:12:31.409 09:40:54 -- setup/common.sh@31 -- # IFS=': ' 00:12:31.409 09:40:54 -- setup/common.sh@31 -- # read -r var val _ 00:12:31.409 09:40:54 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:31.409 09:40:54 -- setup/common.sh@32 -- # continue 00:12:31.409 09:40:54 -- setup/common.sh@31 -- # IFS=': ' 00:12:31.409 09:40:54 -- setup/common.sh@31 -- # read -r var val _ 00:12:31.409 09:40:54 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:31.409 09:40:54 -- setup/common.sh@32 -- # continue 00:12:31.409 09:40:54 -- setup/common.sh@31 -- # IFS=': ' 00:12:31.409 09:40:54 -- setup/common.sh@31 -- # read -r var val _ 00:12:31.409 09:40:54 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:31.409 09:40:54 -- setup/common.sh@32 -- # continue 00:12:31.409 09:40:54 -- setup/common.sh@31 -- # IFS=': ' 00:12:31.409 09:40:54 -- setup/common.sh@31 -- # read -r var val _ 00:12:31.409 09:40:54 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:31.409 09:40:54 -- setup/common.sh@32 -- # continue 00:12:31.409 09:40:54 -- setup/common.sh@31 -- # IFS=': ' 00:12:31.409 09:40:54 -- setup/common.sh@31 -- # read -r var val _ 00:12:31.410 09:40:54 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:31.410 09:40:54 -- setup/common.sh@32 -- # continue 00:12:31.410 09:40:54 -- setup/common.sh@31 -- # IFS=': ' 00:12:31.410 09:40:54 -- setup/common.sh@31 -- # read -r var val _ 00:12:31.410 09:40:54 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:31.410 09:40:54 -- setup/common.sh@32 -- # continue 00:12:31.410 09:40:54 -- setup/common.sh@31 -- # IFS=': ' 00:12:31.410 09:40:54 -- setup/common.sh@31 -- # read -r var val _ 00:12:31.410 09:40:54 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:31.410 09:40:54 -- setup/common.sh@32 -- # continue 00:12:31.410 09:40:54 -- setup/common.sh@31 -- # IFS=': ' 00:12:31.410 09:40:54 -- setup/common.sh@31 -- # read -r var val _ 00:12:31.410 09:40:54 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:31.410 09:40:54 -- setup/common.sh@32 -- # continue 00:12:31.410 09:40:54 -- setup/common.sh@31 -- # IFS=': ' 00:12:31.410 09:40:54 -- setup/common.sh@31 -- # read -r var val _ 00:12:31.410 09:40:54 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:31.410 09:40:54 -- setup/common.sh@32 -- # continue 00:12:31.410 09:40:54 -- setup/common.sh@31 -- # IFS=': ' 00:12:31.410 09:40:54 -- setup/common.sh@31 -- # read -r var val _ 00:12:31.410 09:40:54 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:31.410 09:40:54 -- setup/common.sh@32 -- # continue 00:12:31.410 09:40:54 -- setup/common.sh@31 -- # IFS=': ' 00:12:31.410 09:40:54 -- setup/common.sh@31 -- # read -r var val _ 00:12:31.410 09:40:54 -- setup/common.sh@32 -- # [[ SUnreclaim == 
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:31.410 09:40:54 -- setup/common.sh@32 -- # continue 00:12:31.410 09:40:54 -- setup/common.sh@31 -- # IFS=': ' 00:12:31.410 09:40:54 -- setup/common.sh@31 -- # read -r var val _ 00:12:31.410 09:40:54 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:31.410 09:40:54 -- setup/common.sh@32 -- # continue 00:12:31.410 09:40:54 -- setup/common.sh@31 -- # IFS=': ' 00:12:31.410 09:40:54 -- setup/common.sh@31 -- # read -r var val _ 00:12:31.410 09:40:54 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:31.410 09:40:54 -- setup/common.sh@32 -- # continue 00:12:31.410 09:40:54 -- setup/common.sh@31 -- # IFS=': ' 00:12:31.410 09:40:54 -- setup/common.sh@31 -- # read -r var val _ 00:12:31.410 09:40:54 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:31.410 09:40:54 -- setup/common.sh@32 -- # continue 00:12:31.410 09:40:54 -- setup/common.sh@31 -- # IFS=': ' 00:12:31.410 09:40:54 -- setup/common.sh@31 -- # read -r var val _ 00:12:31.410 09:40:54 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:31.410 09:40:54 -- setup/common.sh@32 -- # continue 00:12:31.410 09:40:54 -- setup/common.sh@31 -- # IFS=': ' 00:12:31.410 09:40:54 -- setup/common.sh@31 -- # read -r var val _ 00:12:31.410 09:40:54 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:31.410 09:40:54 -- setup/common.sh@32 -- # continue 00:12:31.410 09:40:54 -- setup/common.sh@31 -- # IFS=': ' 00:12:31.410 09:40:54 -- setup/common.sh@31 -- # read -r var val _ 00:12:31.410 09:40:54 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:31.410 09:40:54 -- setup/common.sh@32 -- # continue 00:12:31.410 09:40:54 -- setup/common.sh@31 -- # IFS=': ' 00:12:31.410 09:40:54 -- setup/common.sh@31 -- # read -r var val _ 00:12:31.410 09:40:54 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:31.410 09:40:54 -- setup/common.sh@32 -- # continue 00:12:31.410 09:40:54 -- setup/common.sh@31 -- # IFS=': ' 00:12:31.410 09:40:54 -- setup/common.sh@31 -- # read -r var val _ 00:12:31.410 09:40:54 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:31.410 09:40:54 -- setup/common.sh@32 -- # continue 00:12:31.410 09:40:54 -- setup/common.sh@31 -- # IFS=': ' 00:12:31.410 09:40:54 -- setup/common.sh@31 -- # read -r var val _ 00:12:31.410 09:40:54 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:31.410 09:40:54 -- setup/common.sh@32 -- # continue 00:12:31.410 09:40:54 -- setup/common.sh@31 -- # IFS=': ' 00:12:31.410 09:40:54 -- setup/common.sh@31 -- # read -r var val _ 00:12:31.410 09:40:54 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:31.410 09:40:54 -- setup/common.sh@32 -- # continue 00:12:31.410 09:40:54 -- setup/common.sh@31 -- # IFS=': ' 00:12:31.410 09:40:54 -- setup/common.sh@31 -- # read -r var val _ 00:12:31.410 09:40:54 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:31.410 09:40:54 -- setup/common.sh@32 -- # continue 00:12:31.410 09:40:54 -- setup/common.sh@31 -- # IFS=': ' 00:12:31.410 09:40:54 -- setup/common.sh@31 -- # read -r var val _ 00:12:31.410 09:40:54 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:31.410 09:40:54 -- setup/common.sh@32 -- # continue 00:12:31.410 09:40:54 -- setup/common.sh@31 -- # IFS=': ' 
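The long run of '-- # continue' entries through here is get_meminfo in setup/common.sh walking every /proc/meminfo field until it reaches the requested key (HugePages_Total for this odd_alloc check). A minimal stand-alone sketch of an equivalent lookup; the helper name and structure are illustrative, not the SPDK code itself:

    get_meminfo_sketch() {   # hypothetical helper, for illustration only
        local key=$1 node=$2 file=/proc/meminfo
        # Per-node queries read that node's meminfo, whose lines carry a "Node N" prefix.
        [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]] &&
            file=/sys/devices/system/node/node$node/meminfo
        sed 's/^Node [0-9]* *//' "$file" | awk -v k="$key" -F': *' '$1 == k { print $2 + 0 }'
    }

Against the snapshot printed above, 'get_meminfo_sketch HugePages_Total' would print 1025, and 'get_meminfo_sketch HugePages_Surp 0' would read node0's per-node file, mirroring the two calls made in this trace.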
00:12:31.410 09:40:54 -- setup/common.sh@31 -- # read -r var val _ 00:12:31.410 09:40:54 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:31.410 09:40:54 -- setup/common.sh@32 -- # continue 00:12:31.410 09:40:54 -- setup/common.sh@31 -- # IFS=': ' 00:12:31.410 09:40:54 -- setup/common.sh@31 -- # read -r var val _ 00:12:31.410 09:40:54 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:31.410 09:40:54 -- setup/common.sh@32 -- # continue 00:12:31.410 09:40:54 -- setup/common.sh@31 -- # IFS=': ' 00:12:31.410 09:40:54 -- setup/common.sh@31 -- # read -r var val _ 00:12:31.410 09:40:54 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:31.410 09:40:54 -- setup/common.sh@32 -- # continue 00:12:31.410 09:40:54 -- setup/common.sh@31 -- # IFS=': ' 00:12:31.410 09:40:54 -- setup/common.sh@31 -- # read -r var val _ 00:12:31.410 09:40:54 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:31.410 09:40:54 -- setup/common.sh@32 -- # continue 00:12:31.410 09:40:54 -- setup/common.sh@31 -- # IFS=': ' 00:12:31.410 09:40:54 -- setup/common.sh@31 -- # read -r var val _ 00:12:31.410 09:40:54 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:31.410 09:40:54 -- setup/common.sh@32 -- # continue 00:12:31.410 09:40:54 -- setup/common.sh@31 -- # IFS=': ' 00:12:31.410 09:40:54 -- setup/common.sh@31 -- # read -r var val _ 00:12:31.410 09:40:54 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:31.410 09:40:54 -- setup/common.sh@32 -- # continue 00:12:31.410 09:40:54 -- setup/common.sh@31 -- # IFS=': ' 00:12:31.410 09:40:54 -- setup/common.sh@31 -- # read -r var val _ 00:12:31.410 09:40:54 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:31.410 09:40:54 -- setup/common.sh@32 -- # continue 00:12:31.410 09:40:54 -- setup/common.sh@31 -- # IFS=': ' 00:12:31.410 09:40:54 -- setup/common.sh@31 -- # read -r var val _ 00:12:31.410 09:40:54 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:31.410 09:40:54 -- setup/common.sh@32 -- # continue 00:12:31.410 09:40:54 -- setup/common.sh@31 -- # IFS=': ' 00:12:31.410 09:40:54 -- setup/common.sh@31 -- # read -r var val _ 00:12:31.410 09:40:54 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:31.410 09:40:54 -- setup/common.sh@32 -- # continue 00:12:31.411 09:40:54 -- setup/common.sh@31 -- # IFS=': ' 00:12:31.411 09:40:54 -- setup/common.sh@31 -- # read -r var val _ 00:12:31.411 09:40:54 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:31.411 09:40:54 -- setup/common.sh@33 -- # echo 1025 00:12:31.411 09:40:54 -- setup/common.sh@33 -- # return 0 00:12:31.411 09:40:54 -- setup/hugepages.sh@110 -- # (( 1025 == nr_hugepages + surp + resv )) 00:12:31.411 09:40:54 -- setup/hugepages.sh@112 -- # get_nodes 00:12:31.411 09:40:54 -- setup/hugepages.sh@27 -- # local node 00:12:31.411 09:40:54 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:12:31.411 09:40:54 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1025 00:12:31.411 09:40:54 -- setup/hugepages.sh@32 -- # no_nodes=1 00:12:31.411 09:40:54 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:12:31.411 09:40:54 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:12:31.411 09:40:54 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += 
resv )) 00:12:31.411 09:40:54 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:12:31.411 09:40:54 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:12:31.411 09:40:54 -- setup/common.sh@18 -- # local node=0 00:12:31.411 09:40:54 -- setup/common.sh@19 -- # local var val 00:12:31.411 09:40:54 -- setup/common.sh@20 -- # local mem_f mem 00:12:31.411 09:40:54 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:12:31.411 09:40:54 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:12:31.411 09:40:54 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:12:31.411 09:40:54 -- setup/common.sh@28 -- # mapfile -t mem 00:12:31.411 09:40:54 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:12:31.411 09:40:54 -- setup/common.sh@31 -- # IFS=': ' 00:12:31.411 09:40:54 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 7540232 kB' 'MemUsed: 4701740 kB' 'SwapCached: 0 kB' 'Active: 850276 kB' 'Inactive: 1449604 kB' 'Active(anon): 133044 kB' 'Inactive(anon): 0 kB' 'Active(file): 717232 kB' 'Inactive(file): 1449604 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 1300 kB' 'Writeback: 0 kB' 'FilePages: 2177300 kB' 'Mapped: 48748 kB' 'AnonPages: 124168 kB' 'Shmem: 10464 kB' 'KernelStack: 6368 kB' 'PageTables: 4452 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 64740 kB' 'Slab: 137832 kB' 'SReclaimable: 64740 kB' 'SUnreclaim: 73092 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Surp: 0' 00:12:31.411 09:40:54 -- setup/common.sh@31 -- # read -r var val _ 00:12:31.411 09:40:54 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:31.411 09:40:54 -- setup/common.sh@32 -- # continue 00:12:31.411 09:40:54 -- setup/common.sh@31 -- # IFS=': ' 00:12:31.411 09:40:54 -- setup/common.sh@31 -- # read -r var val _ 00:12:31.411 09:40:54 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:31.411 09:40:54 -- setup/common.sh@32 -- # continue 00:12:31.411 09:40:54 -- setup/common.sh@31 -- # IFS=': ' 00:12:31.411 09:40:54 -- setup/common.sh@31 -- # read -r var val _ 00:12:31.411 09:40:54 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:31.411 09:40:54 -- setup/common.sh@32 -- # continue 00:12:31.411 09:40:54 -- setup/common.sh@31 -- # IFS=': ' 00:12:31.411 09:40:54 -- setup/common.sh@31 -- # read -r var val _ 00:12:31.411 09:40:54 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:31.411 09:40:54 -- setup/common.sh@32 -- # continue 00:12:31.411 09:40:54 -- setup/common.sh@31 -- # IFS=': ' 00:12:31.411 09:40:54 -- setup/common.sh@31 -- # read -r var val _ 00:12:31.411 09:40:54 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:31.411 09:40:54 -- setup/common.sh@32 -- # continue 00:12:31.411 09:40:54 -- setup/common.sh@31 -- # IFS=': ' 00:12:31.411 09:40:54 -- setup/common.sh@31 -- # read -r var val _ 00:12:31.411 09:40:54 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:31.411 09:40:54 -- setup/common.sh@32 -- # continue 00:12:31.411 09:40:54 -- setup/common.sh@31 -- # IFS=': ' 00:12:31.411 09:40:54 -- setup/common.sh@31 -- # read -r var val _ 00:12:31.411 09:40:54 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:31.411 09:40:54 -- 
setup/common.sh@32 -- # continue 00:12:31.411 09:40:54 -- setup/common.sh@31 -- # IFS=': ' 00:12:31.411 09:40:54 -- setup/common.sh@31 -- # read -r var val _ 00:12:31.411 09:40:54 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:31.411 09:40:54 -- setup/common.sh@32 -- # continue 00:12:31.411 09:40:54 -- setup/common.sh@31 -- # IFS=': ' 00:12:31.411 09:40:54 -- setup/common.sh@31 -- # read -r var val _ 00:12:31.411 09:40:54 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:31.411 09:40:54 -- setup/common.sh@32 -- # continue 00:12:31.411 09:40:54 -- setup/common.sh@31 -- # IFS=': ' 00:12:31.411 09:40:54 -- setup/common.sh@31 -- # read -r var val _ 00:12:31.411 09:40:54 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:31.411 09:40:54 -- setup/common.sh@32 -- # continue 00:12:31.411 09:40:54 -- setup/common.sh@31 -- # IFS=': ' 00:12:31.411 09:40:54 -- setup/common.sh@31 -- # read -r var val _ 00:12:31.411 09:40:54 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:31.411 09:40:54 -- setup/common.sh@32 -- # continue 00:12:31.411 09:40:54 -- setup/common.sh@31 -- # IFS=': ' 00:12:31.411 09:40:54 -- setup/common.sh@31 -- # read -r var val _ 00:12:31.411 09:40:54 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:31.411 09:40:54 -- setup/common.sh@32 -- # continue 00:12:31.411 09:40:54 -- setup/common.sh@31 -- # IFS=': ' 00:12:31.411 09:40:54 -- setup/common.sh@31 -- # read -r var val _ 00:12:31.411 09:40:54 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:31.411 09:40:54 -- setup/common.sh@32 -- # continue 00:12:31.411 09:40:54 -- setup/common.sh@31 -- # IFS=': ' 00:12:31.411 09:40:54 -- setup/common.sh@31 -- # read -r var val _ 00:12:31.411 09:40:54 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:31.411 09:40:54 -- setup/common.sh@32 -- # continue 00:12:31.411 09:40:54 -- setup/common.sh@31 -- # IFS=': ' 00:12:31.411 09:40:54 -- setup/common.sh@31 -- # read -r var val _ 00:12:31.411 09:40:54 -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:31.411 09:40:54 -- setup/common.sh@32 -- # continue 00:12:31.411 09:40:54 -- setup/common.sh@31 -- # IFS=': ' 00:12:31.411 09:40:54 -- setup/common.sh@31 -- # read -r var val _ 00:12:31.411 09:40:54 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:31.411 09:40:54 -- setup/common.sh@32 -- # continue 00:12:31.411 09:40:54 -- setup/common.sh@31 -- # IFS=': ' 00:12:31.411 09:40:54 -- setup/common.sh@31 -- # read -r var val _ 00:12:31.411 09:40:54 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:31.411 09:40:54 -- setup/common.sh@32 -- # continue 00:12:31.411 09:40:54 -- setup/common.sh@31 -- # IFS=': ' 00:12:31.411 09:40:54 -- setup/common.sh@31 -- # read -r var val _ 00:12:31.411 09:40:54 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:31.411 09:40:54 -- setup/common.sh@32 -- # continue 00:12:31.411 09:40:54 -- setup/common.sh@31 -- # IFS=': ' 00:12:31.411 09:40:54 -- setup/common.sh@31 -- # read -r var val _ 00:12:31.411 09:40:54 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:31.412 09:40:54 -- setup/common.sh@32 -- # continue 00:12:31.412 09:40:54 -- setup/common.sh@31 -- # IFS=': ' 00:12:31.412 09:40:54 -- setup/common.sh@31 -- # read -r var val _ 00:12:31.412 09:40:54 -- 
setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:31.412 09:40:54 -- setup/common.sh@32 -- # continue 00:12:31.412 09:40:54 -- setup/common.sh@31 -- # IFS=': ' 00:12:31.412 09:40:54 -- setup/common.sh@31 -- # read -r var val _ 00:12:31.412 09:40:54 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:31.412 09:40:54 -- setup/common.sh@32 -- # continue 00:12:31.412 09:40:54 -- setup/common.sh@31 -- # IFS=': ' 00:12:31.412 09:40:54 -- setup/common.sh@31 -- # read -r var val _ 00:12:31.412 09:40:54 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:31.412 09:40:54 -- setup/common.sh@32 -- # continue 00:12:31.412 09:40:54 -- setup/common.sh@31 -- # IFS=': ' 00:12:31.412 09:40:54 -- setup/common.sh@31 -- # read -r var val _ 00:12:31.412 09:40:54 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:31.412 09:40:54 -- setup/common.sh@32 -- # continue 00:12:31.412 09:40:54 -- setup/common.sh@31 -- # IFS=': ' 00:12:31.412 09:40:54 -- setup/common.sh@31 -- # read -r var val _ 00:12:31.412 09:40:54 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:31.412 09:40:54 -- setup/common.sh@32 -- # continue 00:12:31.412 09:40:54 -- setup/common.sh@31 -- # IFS=': ' 00:12:31.412 09:40:54 -- setup/common.sh@31 -- # read -r var val _ 00:12:31.412 09:40:54 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:31.412 09:40:54 -- setup/common.sh@32 -- # continue 00:12:31.412 09:40:54 -- setup/common.sh@31 -- # IFS=': ' 00:12:31.412 09:40:54 -- setup/common.sh@31 -- # read -r var val _ 00:12:31.412 09:40:54 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:31.412 09:40:54 -- setup/common.sh@32 -- # continue 00:12:31.412 09:40:54 -- setup/common.sh@31 -- # IFS=': ' 00:12:31.412 09:40:54 -- setup/common.sh@31 -- # read -r var val _ 00:12:31.412 09:40:54 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:31.412 09:40:54 -- setup/common.sh@32 -- # continue 00:12:31.412 09:40:54 -- setup/common.sh@31 -- # IFS=': ' 00:12:31.412 09:40:54 -- setup/common.sh@31 -- # read -r var val _ 00:12:31.412 09:40:54 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:31.412 09:40:54 -- setup/common.sh@32 -- # continue 00:12:31.412 09:40:54 -- setup/common.sh@31 -- # IFS=': ' 00:12:31.412 09:40:54 -- setup/common.sh@31 -- # read -r var val _ 00:12:31.412 09:40:54 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:31.412 09:40:54 -- setup/common.sh@32 -- # continue 00:12:31.412 09:40:54 -- setup/common.sh@31 -- # IFS=': ' 00:12:31.412 09:40:54 -- setup/common.sh@31 -- # read -r var val _ 00:12:31.412 09:40:54 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:31.412 09:40:54 -- setup/common.sh@32 -- # continue 00:12:31.412 09:40:54 -- setup/common.sh@31 -- # IFS=': ' 00:12:31.412 09:40:54 -- setup/common.sh@31 -- # read -r var val _ 00:12:31.412 09:40:54 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:31.412 09:40:54 -- setup/common.sh@32 -- # continue 00:12:31.412 09:40:54 -- setup/common.sh@31 -- # IFS=': ' 00:12:31.412 09:40:54 -- setup/common.sh@31 -- # read -r var val _ 00:12:31.412 09:40:54 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:31.412 09:40:54 -- setup/common.sh@32 -- # continue 00:12:31.412 09:40:54 -- 
setup/common.sh@31 -- # IFS=': ' 00:12:31.412 09:40:54 -- setup/common.sh@31 -- # read -r var val _ 00:12:31.412 09:40:54 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:31.412 09:40:54 -- setup/common.sh@32 -- # continue 00:12:31.412 09:40:54 -- setup/common.sh@31 -- # IFS=': ' 00:12:31.412 09:40:54 -- setup/common.sh@31 -- # read -r var val _ 00:12:31.412 09:40:54 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:31.412 09:40:54 -- setup/common.sh@32 -- # continue 00:12:31.412 09:40:54 -- setup/common.sh@31 -- # IFS=': ' 00:12:31.412 09:40:54 -- setup/common.sh@31 -- # read -r var val _ 00:12:31.412 09:40:54 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:31.412 09:40:54 -- setup/common.sh@32 -- # continue 00:12:31.412 09:40:54 -- setup/common.sh@31 -- # IFS=': ' 00:12:31.412 09:40:54 -- setup/common.sh@31 -- # read -r var val _ 00:12:31.412 09:40:54 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:31.412 09:40:54 -- setup/common.sh@32 -- # continue 00:12:31.412 09:40:54 -- setup/common.sh@31 -- # IFS=': ' 00:12:31.412 09:40:54 -- setup/common.sh@31 -- # read -r var val _ 00:12:31.412 09:40:54 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:31.412 09:40:54 -- setup/common.sh@33 -- # echo 0 00:12:31.412 09:40:54 -- setup/common.sh@33 -- # return 0 00:12:31.412 09:40:54 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:12:31.412 09:40:54 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:12:31.412 09:40:54 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:12:31.412 node0=1025 expecting 1025 00:12:31.412 ************************************ 00:12:31.412 END TEST odd_alloc 00:12:31.412 ************************************ 00:12:31.412 09:40:54 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:12:31.412 09:40:54 -- setup/hugepages.sh@128 -- # echo 'node0=1025 expecting 1025' 00:12:31.412 09:40:54 -- setup/hugepages.sh@130 -- # [[ 1025 == \1\0\2\5 ]] 00:12:31.412 00:12:31.412 real 0m0.577s 00:12:31.412 user 0m0.299s 00:12:31.412 sys 0m0.295s 00:12:31.412 09:40:54 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:12:31.412 09:40:54 -- common/autotest_common.sh@10 -- # set +x 00:12:31.672 09:40:54 -- setup/hugepages.sh@214 -- # run_test custom_alloc custom_alloc 00:12:31.672 09:40:54 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:12:31.672 09:40:54 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:12:31.672 09:40:54 -- common/autotest_common.sh@10 -- # set +x 00:12:31.672 ************************************ 00:12:31.672 START TEST custom_alloc 00:12:31.672 ************************************ 00:12:31.672 09:40:54 -- common/autotest_common.sh@1111 -- # custom_alloc 00:12:31.672 09:40:54 -- setup/hugepages.sh@167 -- # local IFS=, 00:12:31.672 09:40:54 -- setup/hugepages.sh@169 -- # local node 00:12:31.672 09:40:54 -- setup/hugepages.sh@170 -- # nodes_hp=() 00:12:31.672 09:40:54 -- setup/hugepages.sh@170 -- # local nodes_hp 00:12:31.672 09:40:54 -- setup/hugepages.sh@172 -- # local nr_hugepages=0 _nr_hugepages=0 00:12:31.672 09:40:54 -- setup/hugepages.sh@174 -- # get_test_nr_hugepages 1048576 00:12:31.672 09:40:54 -- setup/hugepages.sh@49 -- # local size=1048576 00:12:31.672 09:40:54 -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:12:31.672 09:40:54 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:12:31.672 09:40:54 -- 
setup/hugepages.sh@57 -- # nr_hugepages=512 00:12:31.672 09:40:54 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:12:31.672 09:40:54 -- setup/hugepages.sh@62 -- # user_nodes=() 00:12:31.672 09:40:54 -- setup/hugepages.sh@62 -- # local user_nodes 00:12:31.672 09:40:54 -- setup/hugepages.sh@64 -- # local _nr_hugepages=512 00:12:31.672 09:40:54 -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:12:31.672 09:40:54 -- setup/hugepages.sh@67 -- # nodes_test=() 00:12:31.672 09:40:54 -- setup/hugepages.sh@67 -- # local -g nodes_test 00:12:31.672 09:40:54 -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:12:31.672 09:40:54 -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:12:31.672 09:40:54 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:12:31.672 09:40:54 -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=512 00:12:31.672 09:40:54 -- setup/hugepages.sh@83 -- # : 0 00:12:31.672 09:40:54 -- setup/hugepages.sh@84 -- # : 0 00:12:31.672 09:40:54 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:12:31.672 09:40:54 -- setup/hugepages.sh@175 -- # nodes_hp[0]=512 00:12:31.672 09:40:54 -- setup/hugepages.sh@176 -- # (( 1 > 1 )) 00:12:31.672 09:40:54 -- setup/hugepages.sh@181 -- # for node in "${!nodes_hp[@]}" 00:12:31.672 09:40:54 -- setup/hugepages.sh@182 -- # HUGENODE+=("nodes_hp[$node]=${nodes_hp[node]}") 00:12:31.672 09:40:54 -- setup/hugepages.sh@183 -- # (( _nr_hugepages += nodes_hp[node] )) 00:12:31.672 09:40:54 -- setup/hugepages.sh@186 -- # get_test_nr_hugepages_per_node 00:12:31.672 09:40:54 -- setup/hugepages.sh@62 -- # user_nodes=() 00:12:31.672 09:40:54 -- setup/hugepages.sh@62 -- # local user_nodes 00:12:31.672 09:40:54 -- setup/hugepages.sh@64 -- # local _nr_hugepages=512 00:12:31.672 09:40:54 -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:12:31.672 09:40:54 -- setup/hugepages.sh@67 -- # nodes_test=() 00:12:31.672 09:40:54 -- setup/hugepages.sh@67 -- # local -g nodes_test 00:12:31.672 09:40:54 -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:12:31.672 09:40:54 -- setup/hugepages.sh@74 -- # (( 1 > 0 )) 00:12:31.672 09:40:54 -- setup/hugepages.sh@75 -- # for _no_nodes in "${!nodes_hp[@]}" 00:12:31.672 09:40:54 -- setup/hugepages.sh@76 -- # nodes_test[_no_nodes]=512 00:12:31.672 09:40:54 -- setup/hugepages.sh@78 -- # return 0 00:12:31.672 09:40:54 -- setup/hugepages.sh@187 -- # HUGENODE='nodes_hp[0]=512' 00:12:31.672 09:40:54 -- setup/hugepages.sh@187 -- # setup output 00:12:31.672 09:40:54 -- setup/common.sh@9 -- # [[ output == output ]] 00:12:31.672 09:40:54 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:12:31.932 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:12:31.932 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:12:31.932 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:12:31.932 09:40:55 -- setup/hugepages.sh@188 -- # nr_hugepages=512 00:12:31.932 09:40:55 -- setup/hugepages.sh@188 -- # verify_nr_hugepages 00:12:31.932 09:40:55 -- setup/hugepages.sh@89 -- # local node 00:12:31.932 09:40:55 -- setup/hugepages.sh@90 -- # local sorted_t 00:12:31.932 09:40:55 -- setup/hugepages.sh@91 -- # local sorted_s 00:12:31.932 09:40:55 -- setup/hugepages.sh@92 -- # local surp 00:12:31.932 09:40:55 -- setup/hugepages.sh@93 -- # local resv 00:12:31.932 09:40:55 -- setup/hugepages.sh@94 -- # local anon 00:12:31.932 09:40:55 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:12:31.932 09:40:55 -- setup/hugepages.sh@97 -- 
# get_meminfo AnonHugePages 00:12:31.932 09:40:55 -- setup/common.sh@17 -- # local get=AnonHugePages 00:12:31.932 09:40:55 -- setup/common.sh@18 -- # local node= 00:12:31.932 09:40:55 -- setup/common.sh@19 -- # local var val 00:12:31.932 09:40:55 -- setup/common.sh@20 -- # local mem_f mem 00:12:31.932 09:40:55 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:12:31.932 09:40:55 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:12:31.932 09:40:55 -- setup/common.sh@25 -- # [[ -n '' ]] 00:12:31.932 09:40:55 -- setup/common.sh@28 -- # mapfile -t mem 00:12:31.932 09:40:55 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:12:31.932 09:40:55 -- setup/common.sh@31 -- # IFS=': ' 00:12:31.932 09:40:55 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 8586488 kB' 'MemAvailable: 10548760 kB' 'Buffers: 2436 kB' 'Cached: 2174868 kB' 'SwapCached: 0 kB' 'Active: 850464 kB' 'Inactive: 1449608 kB' 'Active(anon): 133232 kB' 'Inactive(anon): 0 kB' 'Active(file): 717232 kB' 'Inactive(file): 1449608 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 1444 kB' 'Writeback: 0 kB' 'AnonPages: 124644 kB' 'Mapped: 48868 kB' 'Shmem: 10464 kB' 'KReclaimable: 64740 kB' 'Slab: 137848 kB' 'SReclaimable: 64740 kB' 'SUnreclaim: 73108 kB' 'KernelStack: 6356 kB' 'PageTables: 4480 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13985300 kB' 'Committed_AS: 357248 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54740 kB' 'VmallocChunk: 0 kB' 'Percpu: 6240 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 173932 kB' 'DirectMap2M: 6117376 kB' 'DirectMap1G: 8388608 kB' 00:12:31.932 09:40:55 -- setup/common.sh@31 -- # read -r var val _ 00:12:31.932 09:40:55 -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:31.932 09:40:55 -- setup/common.sh@32 -- # continue 00:12:31.932 09:40:55 -- setup/common.sh@31 -- # IFS=': ' 00:12:31.932 09:40:55 -- setup/common.sh@31 -- # read -r var val _ 00:12:31.932 09:40:55 -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:31.932 09:40:55 -- setup/common.sh@32 -- # continue 00:12:31.932 09:40:55 -- setup/common.sh@31 -- # IFS=': ' 00:12:31.932 09:40:55 -- setup/common.sh@31 -- # read -r var val _ 00:12:31.932 09:40:55 -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:31.932 09:40:55 -- setup/common.sh@32 -- # continue 00:12:31.932 09:40:55 -- setup/common.sh@31 -- # IFS=': ' 00:12:31.932 09:40:55 -- setup/common.sh@31 -- # read -r var val _ 00:12:31.932 09:40:55 -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:31.932 09:40:55 -- setup/common.sh@32 -- # continue 00:12:31.932 09:40:55 -- setup/common.sh@31 -- # IFS=': ' 00:12:31.932 09:40:55 -- setup/common.sh@31 -- # read -r var val _ 00:12:31.932 09:40:55 -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:31.932 09:40:55 -- setup/common.sh@32 -- # continue 00:12:31.932 09:40:55 -- setup/common.sh@31 -- # IFS=': ' 00:12:31.932 09:40:55 -- setup/common.sh@31 -- # read -r var val _ 00:12:31.932 09:40:55 -- setup/common.sh@32 -- # [[ SwapCached == 
\A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:31.932 09:40:55 -- setup/common.sh@32 -- # continue 00:12:31.932 09:40:55 -- setup/common.sh@31 -- # IFS=': ' 00:12:31.932 09:40:55 -- setup/common.sh@31 -- # read -r var val _ 00:12:31.932 09:40:55 -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:31.932 09:40:55 -- setup/common.sh@32 -- # continue 00:12:31.932 09:40:55 -- setup/common.sh@31 -- # IFS=': ' 00:12:31.932 09:40:55 -- setup/common.sh@31 -- # read -r var val _ 00:12:31.932 09:40:55 -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:31.932 09:40:55 -- setup/common.sh@32 -- # continue 00:12:31.932 09:40:55 -- setup/common.sh@31 -- # IFS=': ' 00:12:31.932 09:40:55 -- setup/common.sh@31 -- # read -r var val _ 00:12:31.932 09:40:55 -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:31.932 09:40:55 -- setup/common.sh@32 -- # continue 00:12:31.932 09:40:55 -- setup/common.sh@31 -- # IFS=': ' 00:12:31.932 09:40:55 -- setup/common.sh@31 -- # read -r var val _ 00:12:31.932 09:40:55 -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:31.932 09:40:55 -- setup/common.sh@32 -- # continue 00:12:31.932 09:40:55 -- setup/common.sh@31 -- # IFS=': ' 00:12:31.932 09:40:55 -- setup/common.sh@31 -- # read -r var val _ 00:12:31.932 09:40:55 -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:31.932 09:40:55 -- setup/common.sh@32 -- # continue 00:12:31.932 09:40:55 -- setup/common.sh@31 -- # IFS=': ' 00:12:31.932 09:40:55 -- setup/common.sh@31 -- # read -r var val _ 00:12:31.932 09:40:55 -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:31.932 09:40:55 -- setup/common.sh@32 -- # continue 00:12:31.932 09:40:55 -- setup/common.sh@31 -- # IFS=': ' 00:12:31.932 09:40:55 -- setup/common.sh@31 -- # read -r var val _ 00:12:31.932 09:40:55 -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:31.932 09:40:55 -- setup/common.sh@32 -- # continue 00:12:31.932 09:40:55 -- setup/common.sh@31 -- # IFS=': ' 00:12:31.932 09:40:55 -- setup/common.sh@31 -- # read -r var val _ 00:12:31.932 09:40:55 -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:31.932 09:40:55 -- setup/common.sh@32 -- # continue 00:12:31.932 09:40:55 -- setup/common.sh@31 -- # IFS=': ' 00:12:31.932 09:40:55 -- setup/common.sh@31 -- # read -r var val _ 00:12:31.932 09:40:55 -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:31.932 09:40:55 -- setup/common.sh@32 -- # continue 00:12:31.932 09:40:55 -- setup/common.sh@31 -- # IFS=': ' 00:12:31.932 09:40:55 -- setup/common.sh@31 -- # read -r var val _ 00:12:31.932 09:40:55 -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:31.932 09:40:55 -- setup/common.sh@32 -- # continue 00:12:31.932 09:40:55 -- setup/common.sh@31 -- # IFS=': ' 00:12:31.932 09:40:55 -- setup/common.sh@31 -- # read -r var val _ 00:12:31.932 09:40:55 -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:31.932 09:40:55 -- setup/common.sh@32 -- # continue 00:12:31.932 09:40:55 -- setup/common.sh@31 -- # IFS=': ' 00:12:31.932 09:40:55 -- setup/common.sh@31 -- # read -r var val _ 00:12:31.932 09:40:55 -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:31.932 09:40:55 -- setup/common.sh@32 -- # continue 00:12:31.932 09:40:55 -- setup/common.sh@31 -- # IFS=': ' 00:12:31.932 09:40:55 -- setup/common.sh@31 -- # read -r var val _ 
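The '[[ always [madvise] never != *\[\n\e\v\e\r\]* ]]' test a few lines above gates the AnonHugePages read on the kernel's transparent-hugepage mode. A short sketch of that gate; treating a disabled THP mode as anon=0 is an assumption for illustration, not something this trace shows:

    thp=$(cat /sys/kernel/mm/transparent_hugepage/enabled)          # e.g. "always [madvise] never"
    if [[ $thp != *'[never]'* ]]; then
        # AnonHugePages in /proc/meminfo is the kB of THP-backed anonymous memory.
        anon=$(awk '/^AnonHugePages:/ { print $2 }' /proc/meminfo)
    else
        anon=0                                                       # assumed fallback when THP is off
    fi
    echo "anon_hugepages=$anon"                                      # 0 kB in the snapshot above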
00:12:31.932 09:40:55 -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:31.932 09:40:55 -- setup/common.sh@32 -- # continue 00:12:31.932 09:40:55 -- setup/common.sh@31 -- # IFS=': ' 00:12:31.932 09:40:55 -- setup/common.sh@31 -- # read -r var val _ 00:12:31.932 09:40:55 -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:31.932 09:40:55 -- setup/common.sh@32 -- # continue 00:12:31.932 09:40:55 -- setup/common.sh@31 -- # IFS=': ' 00:12:31.932 09:40:55 -- setup/common.sh@31 -- # read -r var val _ 00:12:31.932 09:40:55 -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:31.932 09:40:55 -- setup/common.sh@32 -- # continue 00:12:31.932 09:40:55 -- setup/common.sh@31 -- # IFS=': ' 00:12:31.932 09:40:55 -- setup/common.sh@31 -- # read -r var val _ 00:12:31.932 09:40:55 -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:31.932 09:40:55 -- setup/common.sh@32 -- # continue 00:12:31.932 09:40:55 -- setup/common.sh@31 -- # IFS=': ' 00:12:31.932 09:40:55 -- setup/common.sh@31 -- # read -r var val _ 00:12:31.932 09:40:55 -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:31.932 09:40:55 -- setup/common.sh@32 -- # continue 00:12:31.932 09:40:55 -- setup/common.sh@31 -- # IFS=': ' 00:12:31.932 09:40:55 -- setup/common.sh@31 -- # read -r var val _ 00:12:31.932 09:40:55 -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:31.932 09:40:55 -- setup/common.sh@32 -- # continue 00:12:31.932 09:40:55 -- setup/common.sh@31 -- # IFS=': ' 00:12:31.932 09:40:55 -- setup/common.sh@31 -- # read -r var val _ 00:12:31.932 09:40:55 -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:31.932 09:40:55 -- setup/common.sh@32 -- # continue 00:12:31.932 09:40:55 -- setup/common.sh@31 -- # IFS=': ' 00:12:31.932 09:40:55 -- setup/common.sh@31 -- # read -r var val _ 00:12:31.932 09:40:55 -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:31.932 09:40:55 -- setup/common.sh@32 -- # continue 00:12:32.196 09:40:55 -- setup/common.sh@31 -- # IFS=': ' 00:12:32.196 09:40:55 -- setup/common.sh@31 -- # read -r var val _ 00:12:32.196 09:40:55 -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:32.196 09:40:55 -- setup/common.sh@32 -- # continue 00:12:32.196 09:40:55 -- setup/common.sh@31 -- # IFS=': ' 00:12:32.196 09:40:55 -- setup/common.sh@31 -- # read -r var val _ 00:12:32.196 09:40:55 -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:32.196 09:40:55 -- setup/common.sh@32 -- # continue 00:12:32.196 09:40:55 -- setup/common.sh@31 -- # IFS=': ' 00:12:32.196 09:40:55 -- setup/common.sh@31 -- # read -r var val _ 00:12:32.196 09:40:55 -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:32.196 09:40:55 -- setup/common.sh@32 -- # continue 00:12:32.196 09:40:55 -- setup/common.sh@31 -- # IFS=': ' 00:12:32.196 09:40:55 -- setup/common.sh@31 -- # read -r var val _ 00:12:32.196 09:40:55 -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:32.196 09:40:55 -- setup/common.sh@32 -- # continue 00:12:32.196 09:40:55 -- setup/common.sh@31 -- # IFS=': ' 00:12:32.196 09:40:55 -- setup/common.sh@31 -- # read -r var val _ 00:12:32.196 09:40:55 -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:32.196 09:40:55 -- setup/common.sh@32 -- # continue 00:12:32.196 09:40:55 -- setup/common.sh@31 -- # IFS=': ' 
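For the custom_alloc run started above, get_test_nr_hugepages was handed 1048576 kB, and the snapshots report a 2048 kB Hugepagesize; that division is where the 512 in nr_hugepages, HUGENODE='nodes_hp[0]=512' and HugePages_Total comes from. A quick sketch of the arithmetic (variable names are illustrative):

    size_kb=1048576                                                        # requested pool, 1 GiB in kB
    hugepagesize_kb=$(awk '/^Hugepagesize:/ { print $2 }' /proc/meminfo)   # 2048 on this VM
    nr_hugepages=$(( size_kb / hugepagesize_kb ))                          # 1048576 / 2048 = 512
    echo "nodes_hp[0]=$nr_hugepages"                                       # matches HUGENODE above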
00:12:32.196 09:40:55 -- setup/common.sh@31 -- # read -r var val _ 00:12:32.196 09:40:55 -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:32.196 09:40:55 -- setup/common.sh@32 -- # continue 00:12:32.196 09:40:55 -- setup/common.sh@31 -- # IFS=': ' 00:12:32.196 09:40:55 -- setup/common.sh@31 -- # read -r var val _ 00:12:32.196 09:40:55 -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:32.196 09:40:55 -- setup/common.sh@32 -- # continue 00:12:32.196 09:40:55 -- setup/common.sh@31 -- # IFS=': ' 00:12:32.196 09:40:55 -- setup/common.sh@31 -- # read -r var val _ 00:12:32.196 09:40:55 -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:32.196 09:40:55 -- setup/common.sh@32 -- # continue 00:12:32.196 09:40:55 -- setup/common.sh@31 -- # IFS=': ' 00:12:32.196 09:40:55 -- setup/common.sh@31 -- # read -r var val _ 00:12:32.196 09:40:55 -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:32.196 09:40:55 -- setup/common.sh@32 -- # continue 00:12:32.196 09:40:55 -- setup/common.sh@31 -- # IFS=': ' 00:12:32.196 09:40:55 -- setup/common.sh@31 -- # read -r var val _ 00:12:32.196 09:40:55 -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:32.196 09:40:55 -- setup/common.sh@32 -- # continue 00:12:32.196 09:40:55 -- setup/common.sh@31 -- # IFS=': ' 00:12:32.196 09:40:55 -- setup/common.sh@31 -- # read -r var val _ 00:12:32.196 09:40:55 -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:32.196 09:40:55 -- setup/common.sh@32 -- # continue 00:12:32.196 09:40:55 -- setup/common.sh@31 -- # IFS=': ' 00:12:32.196 09:40:55 -- setup/common.sh@31 -- # read -r var val _ 00:12:32.196 09:40:55 -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:32.196 09:40:55 -- setup/common.sh@32 -- # continue 00:12:32.196 09:40:55 -- setup/common.sh@31 -- # IFS=': ' 00:12:32.196 09:40:55 -- setup/common.sh@31 -- # read -r var val _ 00:12:32.196 09:40:55 -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:32.196 09:40:55 -- setup/common.sh@32 -- # continue 00:12:32.196 09:40:55 -- setup/common.sh@31 -- # IFS=': ' 00:12:32.196 09:40:55 -- setup/common.sh@31 -- # read -r var val _ 00:12:32.196 09:40:55 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:32.196 09:40:55 -- setup/common.sh@32 -- # continue 00:12:32.196 09:40:55 -- setup/common.sh@31 -- # IFS=': ' 00:12:32.196 09:40:55 -- setup/common.sh@31 -- # read -r var val _ 00:12:32.196 09:40:55 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:32.196 09:40:55 -- setup/common.sh@33 -- # echo 0 00:12:32.196 09:40:55 -- setup/common.sh@33 -- # return 0 00:12:32.196 09:40:55 -- setup/hugepages.sh@97 -- # anon=0 00:12:32.196 09:40:55 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:12:32.196 09:40:55 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:12:32.196 09:40:55 -- setup/common.sh@18 -- # local node= 00:12:32.196 09:40:55 -- setup/common.sh@19 -- # local var val 00:12:32.196 09:40:55 -- setup/common.sh@20 -- # local mem_f mem 00:12:32.196 09:40:55 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:12:32.196 09:40:55 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:12:32.196 09:40:55 -- setup/common.sh@25 -- # [[ -n '' ]] 00:12:32.196 09:40:55 -- setup/common.sh@28 -- # mapfile -t mem 00:12:32.196 09:40:55 -- setup/common.sh@29 -- # 
mem=("${mem[@]#Node +([0-9]) }") 00:12:32.196 09:40:55 -- setup/common.sh@31 -- # IFS=': ' 00:12:32.196 09:40:55 -- setup/common.sh@31 -- # read -r var val _ 00:12:32.196 09:40:55 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 8586236 kB' 'MemAvailable: 10548508 kB' 'Buffers: 2436 kB' 'Cached: 2174868 kB' 'SwapCached: 0 kB' 'Active: 850268 kB' 'Inactive: 1449608 kB' 'Active(anon): 133036 kB' 'Inactive(anon): 0 kB' 'Active(file): 717232 kB' 'Inactive(file): 1449608 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 1444 kB' 'Writeback: 0 kB' 'AnonPages: 124172 kB' 'Mapped: 48756 kB' 'Shmem: 10464 kB' 'KReclaimable: 64740 kB' 'Slab: 137848 kB' 'SReclaimable: 64740 kB' 'SUnreclaim: 73108 kB' 'KernelStack: 6368 kB' 'PageTables: 4452 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13985300 kB' 'Committed_AS: 357248 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54708 kB' 'VmallocChunk: 0 kB' 'Percpu: 6240 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 173932 kB' 'DirectMap2M: 6117376 kB' 'DirectMap1G: 8388608 kB' 00:12:32.196 09:40:55 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:32.196 09:40:55 -- setup/common.sh@32 -- # continue 00:12:32.196 09:40:55 -- setup/common.sh@31 -- # IFS=': ' 00:12:32.196 09:40:55 -- setup/common.sh@31 -- # read -r var val _ 00:12:32.196 09:40:55 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:32.196 09:40:55 -- setup/common.sh@32 -- # continue 00:12:32.196 09:40:55 -- setup/common.sh@31 -- # IFS=': ' 00:12:32.196 09:40:55 -- setup/common.sh@31 -- # read -r var val _ 00:12:32.196 09:40:55 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:32.196 09:40:55 -- setup/common.sh@32 -- # continue 00:12:32.196 09:40:55 -- setup/common.sh@31 -- # IFS=': ' 00:12:32.196 09:40:55 -- setup/common.sh@31 -- # read -r var val _ 00:12:32.196 09:40:55 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:32.196 09:40:55 -- setup/common.sh@32 -- # continue 00:12:32.196 09:40:55 -- setup/common.sh@31 -- # IFS=': ' 00:12:32.196 09:40:55 -- setup/common.sh@31 -- # read -r var val _ 00:12:32.196 09:40:55 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:32.196 09:40:55 -- setup/common.sh@32 -- # continue 00:12:32.196 09:40:55 -- setup/common.sh@31 -- # IFS=': ' 00:12:32.196 09:40:55 -- setup/common.sh@31 -- # read -r var val _ 00:12:32.196 09:40:55 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:32.196 09:40:55 -- setup/common.sh@32 -- # continue 00:12:32.196 09:40:55 -- setup/common.sh@31 -- # IFS=': ' 00:12:32.196 09:40:55 -- setup/common.sh@31 -- # read -r var val _ 00:12:32.196 09:40:55 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:32.196 09:40:55 -- setup/common.sh@32 -- # continue 00:12:32.196 09:40:55 -- setup/common.sh@31 -- # IFS=': ' 00:12:32.196 09:40:55 -- setup/common.sh@31 -- # read -r var val _ 00:12:32.196 09:40:55 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:32.197 09:40:55 -- 
setup/common.sh@32 -- # continue 00:12:32.197 09:40:55 -- setup/common.sh@31 -- # IFS=': ' 00:12:32.197 09:40:55 -- setup/common.sh@31 -- # read -r var val _ 00:12:32.197 09:40:55 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:32.197 09:40:55 -- setup/common.sh@32 -- # continue 00:12:32.197 09:40:55 -- setup/common.sh@31 -- # IFS=': ' 00:12:32.197 09:40:55 -- setup/common.sh@31 -- # read -r var val _ 00:12:32.197 09:40:55 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:32.197 09:40:55 -- setup/common.sh@32 -- # continue 00:12:32.197 09:40:55 -- setup/common.sh@31 -- # IFS=': ' 00:12:32.197 09:40:55 -- setup/common.sh@31 -- # read -r var val _ 00:12:32.197 09:40:55 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:32.197 09:40:55 -- setup/common.sh@32 -- # continue 00:12:32.197 09:40:55 -- setup/common.sh@31 -- # IFS=': ' 00:12:32.197 09:40:55 -- setup/common.sh@31 -- # read -r var val _ 00:12:32.197 09:40:55 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:32.197 09:40:55 -- setup/common.sh@32 -- # continue 00:12:32.197 09:40:55 -- setup/common.sh@31 -- # IFS=': ' 00:12:32.197 09:40:55 -- setup/common.sh@31 -- # read -r var val _ 00:12:32.197 09:40:55 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:32.197 09:40:55 -- setup/common.sh@32 -- # continue 00:12:32.197 09:40:55 -- setup/common.sh@31 -- # IFS=': ' 00:12:32.197 09:40:55 -- setup/common.sh@31 -- # read -r var val _ 00:12:32.197 09:40:55 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:32.197 09:40:55 -- setup/common.sh@32 -- # continue 00:12:32.197 09:40:55 -- setup/common.sh@31 -- # IFS=': ' 00:12:32.197 09:40:55 -- setup/common.sh@31 -- # read -r var val _ 00:12:32.197 09:40:55 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:32.197 09:40:55 -- setup/common.sh@32 -- # continue 00:12:32.197 09:40:55 -- setup/common.sh@31 -- # IFS=': ' 00:12:32.197 09:40:55 -- setup/common.sh@31 -- # read -r var val _ 00:12:32.197 09:40:55 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:32.197 09:40:55 -- setup/common.sh@32 -- # continue 00:12:32.197 09:40:55 -- setup/common.sh@31 -- # IFS=': ' 00:12:32.197 09:40:55 -- setup/common.sh@31 -- # read -r var val _ 00:12:32.197 09:40:55 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:32.197 09:40:55 -- setup/common.sh@32 -- # continue 00:12:32.197 09:40:55 -- setup/common.sh@31 -- # IFS=': ' 00:12:32.197 09:40:55 -- setup/common.sh@31 -- # read -r var val _ 00:12:32.197 09:40:55 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:32.197 09:40:55 -- setup/common.sh@32 -- # continue 00:12:32.197 09:40:55 -- setup/common.sh@31 -- # IFS=': ' 00:12:32.197 09:40:55 -- setup/common.sh@31 -- # read -r var val _ 00:12:32.197 09:40:55 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:32.197 09:40:55 -- setup/common.sh@32 -- # continue 00:12:32.197 09:40:55 -- setup/common.sh@31 -- # IFS=': ' 00:12:32.197 09:40:55 -- setup/common.sh@31 -- # read -r var val _ 00:12:32.197 09:40:55 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:32.197 09:40:55 -- setup/common.sh@32 -- # continue 00:12:32.197 09:40:55 -- setup/common.sh@31 -- # IFS=': ' 00:12:32.197 09:40:55 -- setup/common.sh@31 -- # read -r var val _ 00:12:32.197 09:40:55 -- 
setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:32.197 09:40:55 -- setup/common.sh@32 -- # continue 00:12:32.197 09:40:55 -- setup/common.sh@31 -- # IFS=': ' 00:12:32.197 09:40:55 -- setup/common.sh@31 -- # read -r var val _ 00:12:32.197 09:40:55 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:32.197 09:40:55 -- setup/common.sh@32 -- # continue 00:12:32.197 09:40:55 -- setup/common.sh@31 -- # IFS=': ' 00:12:32.197 09:40:55 -- setup/common.sh@31 -- # read -r var val _ 00:12:32.197 09:40:55 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:32.197 09:40:55 -- setup/common.sh@32 -- # continue 00:12:32.197 09:40:55 -- setup/common.sh@31 -- # IFS=': ' 00:12:32.197 09:40:55 -- setup/common.sh@31 -- # read -r var val _ 00:12:32.197 09:40:55 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:32.197 09:40:55 -- setup/common.sh@32 -- # continue 00:12:32.197 09:40:55 -- setup/common.sh@31 -- # IFS=': ' 00:12:32.197 09:40:55 -- setup/common.sh@31 -- # read -r var val _ 00:12:32.197 09:40:55 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:32.197 09:40:55 -- setup/common.sh@32 -- # continue 00:12:32.197 09:40:55 -- setup/common.sh@31 -- # IFS=': ' 00:12:32.197 09:40:55 -- setup/common.sh@31 -- # read -r var val _ 00:12:32.197 09:40:55 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:32.197 09:40:55 -- setup/common.sh@32 -- # continue 00:12:32.197 09:40:55 -- setup/common.sh@31 -- # IFS=': ' 00:12:32.197 09:40:55 -- setup/common.sh@31 -- # read -r var val _ 00:12:32.197 09:40:55 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:32.197 09:40:55 -- setup/common.sh@32 -- # continue 00:12:32.197 09:40:55 -- setup/common.sh@31 -- # IFS=': ' 00:12:32.197 09:40:55 -- setup/common.sh@31 -- # read -r var val _ 00:12:32.197 09:40:55 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:32.197 09:40:55 -- setup/common.sh@32 -- # continue 00:12:32.197 09:40:55 -- setup/common.sh@31 -- # IFS=': ' 00:12:32.197 09:40:55 -- setup/common.sh@31 -- # read -r var val _ 00:12:32.197 09:40:55 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:32.197 09:40:55 -- setup/common.sh@32 -- # continue 00:12:32.197 09:40:55 -- setup/common.sh@31 -- # IFS=': ' 00:12:32.197 09:40:55 -- setup/common.sh@31 -- # read -r var val _ 00:12:32.197 09:40:55 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:32.197 09:40:55 -- setup/common.sh@32 -- # continue 00:12:32.197 09:40:55 -- setup/common.sh@31 -- # IFS=': ' 00:12:32.197 09:40:55 -- setup/common.sh@31 -- # read -r var val _ 00:12:32.197 09:40:55 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:32.197 09:40:55 -- setup/common.sh@32 -- # continue 00:12:32.197 09:40:55 -- setup/common.sh@31 -- # IFS=': ' 00:12:32.197 09:40:55 -- setup/common.sh@31 -- # read -r var val _ 00:12:32.197 09:40:55 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:32.197 09:40:55 -- setup/common.sh@32 -- # continue 00:12:32.197 09:40:55 -- setup/common.sh@31 -- # IFS=': ' 00:12:32.197 09:40:55 -- setup/common.sh@31 -- # read -r var val _ 00:12:32.197 09:40:55 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:32.197 09:40:55 -- setup/common.sh@32 -- # continue 00:12:32.197 09:40:55 -- setup/common.sh@31 -- # IFS=': ' 
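The verify_nr_hugepages pass that began at hugepages.sh@188 is collecting anon, surp and resv so it can confirm the configured pool is fully accounted for, the same style of check the odd_alloc test above finished with ('node0=1025 expecting 1025'). A hedged sketch of that consistency test for the current 512-page pool:

    nr_hugepages=512                                                   # value requested via HUGENODE above
    total=$(awk '/^HugePages_Total:/ { print $2 }' /proc/meminfo)      # 512 in the snapshots above
    surp=$(awk '/^HugePages_Surp:/ { print $2 }' /proc/meminfo)        # 0
    resv=$(awk '/^HugePages_Rsvd:/ { print $2 }' /proc/meminfo)        # 0
    (( total == nr_hugepages + surp + resv )) && echo "node0=$nr_hugepages expecting $nr_hugepages"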
00:12:32.197 09:40:55 -- setup/common.sh@31 -- # read -r var val _ 00:12:32.197 09:40:55 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:32.197 09:40:55 -- setup/common.sh@32 -- # continue 00:12:32.197 09:40:55 -- setup/common.sh@31 -- # IFS=': ' 00:12:32.197 09:40:55 -- setup/common.sh@31 -- # read -r var val _ 00:12:32.197 09:40:55 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:32.197 09:40:55 -- setup/common.sh@32 -- # continue 00:12:32.197 09:40:55 -- setup/common.sh@31 -- # IFS=': ' 00:12:32.197 09:40:55 -- setup/common.sh@31 -- # read -r var val _ 00:12:32.197 09:40:55 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:32.197 09:40:55 -- setup/common.sh@32 -- # continue 00:12:32.197 09:40:55 -- setup/common.sh@31 -- # IFS=': ' 00:12:32.197 09:40:55 -- setup/common.sh@31 -- # read -r var val _ 00:12:32.197 09:40:55 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:32.197 09:40:55 -- setup/common.sh@32 -- # continue 00:12:32.197 09:40:55 -- setup/common.sh@31 -- # IFS=': ' 00:12:32.197 09:40:55 -- setup/common.sh@31 -- # read -r var val _ 00:12:32.197 09:40:55 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:32.197 09:40:55 -- setup/common.sh@32 -- # continue 00:12:32.197 09:40:55 -- setup/common.sh@31 -- # IFS=': ' 00:12:32.197 09:40:55 -- setup/common.sh@31 -- # read -r var val _ 00:12:32.197 09:40:55 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:32.197 09:40:55 -- setup/common.sh@32 -- # continue 00:12:32.197 09:40:55 -- setup/common.sh@31 -- # IFS=': ' 00:12:32.197 09:40:55 -- setup/common.sh@31 -- # read -r var val _ 00:12:32.197 09:40:55 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:32.197 09:40:55 -- setup/common.sh@32 -- # continue 00:12:32.197 09:40:55 -- setup/common.sh@31 -- # IFS=': ' 00:12:32.197 09:40:55 -- setup/common.sh@31 -- # read -r var val _ 00:12:32.197 09:40:55 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:32.197 09:40:55 -- setup/common.sh@32 -- # continue 00:12:32.197 09:40:55 -- setup/common.sh@31 -- # IFS=': ' 00:12:32.197 09:40:55 -- setup/common.sh@31 -- # read -r var val _ 00:12:32.197 09:40:55 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:32.197 09:40:55 -- setup/common.sh@32 -- # continue 00:12:32.197 09:40:55 -- setup/common.sh@31 -- # IFS=': ' 00:12:32.197 09:40:55 -- setup/common.sh@31 -- # read -r var val _ 00:12:32.197 09:40:55 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:32.197 09:40:55 -- setup/common.sh@32 -- # continue 00:12:32.197 09:40:55 -- setup/common.sh@31 -- # IFS=': ' 00:12:32.197 09:40:55 -- setup/common.sh@31 -- # read -r var val _ 00:12:32.197 09:40:55 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:32.197 09:40:55 -- setup/common.sh@32 -- # continue 00:12:32.197 09:40:55 -- setup/common.sh@31 -- # IFS=': ' 00:12:32.197 09:40:55 -- setup/common.sh@31 -- # read -r var val _ 00:12:32.197 09:40:55 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:32.197 09:40:55 -- setup/common.sh@32 -- # continue 00:12:32.197 09:40:55 -- setup/common.sh@31 -- # IFS=': ' 00:12:32.197 09:40:55 -- setup/common.sh@31 -- # read -r var val _ 00:12:32.197 09:40:55 -- setup/common.sh@32 -- # [[ CmaTotal == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:32.197 09:40:55 -- setup/common.sh@32 -- # continue 00:12:32.197 09:40:55 -- setup/common.sh@31 -- # IFS=': ' 00:12:32.197 09:40:55 -- setup/common.sh@31 -- # read -r var val _ 00:12:32.197 09:40:55 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:32.197 09:40:55 -- setup/common.sh@32 -- # continue 00:12:32.197 09:40:55 -- setup/common.sh@31 -- # IFS=': ' 00:12:32.197 09:40:55 -- setup/common.sh@31 -- # read -r var val _ 00:12:32.197 09:40:55 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:32.197 09:40:55 -- setup/common.sh@32 -- # continue 00:12:32.197 09:40:55 -- setup/common.sh@31 -- # IFS=': ' 00:12:32.197 09:40:55 -- setup/common.sh@31 -- # read -r var val _ 00:12:32.197 09:40:55 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:32.197 09:40:55 -- setup/common.sh@32 -- # continue 00:12:32.198 09:40:55 -- setup/common.sh@31 -- # IFS=': ' 00:12:32.198 09:40:55 -- setup/common.sh@31 -- # read -r var val _ 00:12:32.198 09:40:55 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:32.198 09:40:55 -- setup/common.sh@32 -- # continue 00:12:32.198 09:40:55 -- setup/common.sh@31 -- # IFS=': ' 00:12:32.198 09:40:55 -- setup/common.sh@31 -- # read -r var val _ 00:12:32.198 09:40:55 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:32.198 09:40:55 -- setup/common.sh@32 -- # continue 00:12:32.198 09:40:55 -- setup/common.sh@31 -- # IFS=': ' 00:12:32.198 09:40:55 -- setup/common.sh@31 -- # read -r var val _ 00:12:32.198 09:40:55 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:32.198 09:40:55 -- setup/common.sh@33 -- # echo 0 00:12:32.198 09:40:55 -- setup/common.sh@33 -- # return 0 00:12:32.198 09:40:55 -- setup/hugepages.sh@99 -- # surp=0 00:12:32.198 09:40:55 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:12:32.198 09:40:55 -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:12:32.198 09:40:55 -- setup/common.sh@18 -- # local node= 00:12:32.198 09:40:55 -- setup/common.sh@19 -- # local var val 00:12:32.198 09:40:55 -- setup/common.sh@20 -- # local mem_f mem 00:12:32.198 09:40:55 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:12:32.198 09:40:55 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:12:32.198 09:40:55 -- setup/common.sh@25 -- # [[ -n '' ]] 00:12:32.198 09:40:55 -- setup/common.sh@28 -- # mapfile -t mem 00:12:32.198 09:40:55 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:12:32.198 09:40:55 -- setup/common.sh@31 -- # IFS=': ' 00:12:32.198 09:40:55 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 8586236 kB' 'MemAvailable: 10548508 kB' 'Buffers: 2436 kB' 'Cached: 2174868 kB' 'SwapCached: 0 kB' 'Active: 850036 kB' 'Inactive: 1449608 kB' 'Active(anon): 132804 kB' 'Inactive(anon): 0 kB' 'Active(file): 717232 kB' 'Inactive(file): 1449608 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 1444 kB' 'Writeback: 0 kB' 'AnonPages: 124168 kB' 'Mapped: 48756 kB' 'Shmem: 10464 kB' 'KReclaimable: 64740 kB' 'Slab: 137840 kB' 'SReclaimable: 64740 kB' 'SUnreclaim: 73100 kB' 'KernelStack: 6368 kB' 'PageTables: 4452 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13985300 kB' 'Committed_AS: 357248 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54692 
kB' 'VmallocChunk: 0 kB' 'Percpu: 6240 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 173932 kB' 'DirectMap2M: 6117376 kB' 'DirectMap1G: 8388608 kB' 00:12:32.198 09:40:55 -- setup/common.sh@31 -- # read -r var val _ 00:12:32.198 09:40:55 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:32.198 09:40:55 -- setup/common.sh@32 -- # continue 00:12:32.198 09:40:55 -- setup/common.sh@31 -- # IFS=': ' 00:12:32.198 09:40:55 -- setup/common.sh@31 -- # read -r var val _ 00:12:32.198 09:40:55 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:32.198 09:40:55 -- setup/common.sh@32 -- # continue 00:12:32.198 09:40:55 -- setup/common.sh@31 -- # IFS=': ' 00:12:32.198 09:40:55 -- setup/common.sh@31 -- # read -r var val _ 00:12:32.198 09:40:55 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:32.198 09:40:55 -- setup/common.sh@32 -- # continue 00:12:32.198 09:40:55 -- setup/common.sh@31 -- # IFS=': ' 00:12:32.198 09:40:55 -- setup/common.sh@31 -- # read -r var val _ 00:12:32.198 09:40:55 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:32.198 09:40:55 -- setup/common.sh@32 -- # continue 00:12:32.198 09:40:55 -- setup/common.sh@31 -- # IFS=': ' 00:12:32.198 09:40:55 -- setup/common.sh@31 -- # read -r var val _ 00:12:32.198 09:40:55 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:32.198 09:40:55 -- setup/common.sh@32 -- # continue 00:12:32.198 09:40:55 -- setup/common.sh@31 -- # IFS=': ' 00:12:32.198 09:40:55 -- setup/common.sh@31 -- # read -r var val _ 00:12:32.198 09:40:55 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:32.198 09:40:55 -- setup/common.sh@32 -- # continue 00:12:32.198 09:40:55 -- setup/common.sh@31 -- # IFS=': ' 00:12:32.198 09:40:55 -- setup/common.sh@31 -- # read -r var val _ 00:12:32.198 09:40:55 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:32.198 09:40:55 -- setup/common.sh@32 -- # continue 00:12:32.198 09:40:55 -- setup/common.sh@31 -- # IFS=': ' 00:12:32.198 09:40:55 -- setup/common.sh@31 -- # read -r var val _ 00:12:32.198 09:40:55 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:32.198 09:40:55 -- setup/common.sh@32 -- # continue 00:12:32.198 09:40:55 -- setup/common.sh@31 -- # IFS=': ' 00:12:32.198 09:40:55 -- setup/common.sh@31 -- # read -r var val _ 00:12:32.198 09:40:55 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:32.198 09:40:55 -- setup/common.sh@32 -- # continue 00:12:32.198 09:40:55 -- setup/common.sh@31 -- # IFS=': ' 00:12:32.198 09:40:55 -- setup/common.sh@31 -- # read -r var val _ 00:12:32.198 09:40:55 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:32.198 09:40:55 -- setup/common.sh@32 -- # continue 00:12:32.198 09:40:55 -- setup/common.sh@31 -- # IFS=': ' 00:12:32.198 09:40:55 -- setup/common.sh@31 -- # read -r var val _ 00:12:32.198 09:40:55 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:32.198 09:40:55 -- setup/common.sh@32 -- # continue 00:12:32.198 09:40:55 -- setup/common.sh@31 -- # IFS=': ' 00:12:32.198 09:40:55 -- 
setup/common.sh@31 -- # read -r var val _ 00:12:32.198 09:40:55 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:32.198 09:40:55 -- setup/common.sh@32 -- # continue 00:12:32.198 09:40:55 -- setup/common.sh@31 -- # IFS=': ' 00:12:32.198 09:40:55 -- setup/common.sh@31 -- # read -r var val _ 00:12:32.198 09:40:55 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:32.198 09:40:55 -- setup/common.sh@32 -- # continue 00:12:32.198 09:40:55 -- setup/common.sh@31 -- # IFS=': ' 00:12:32.198 09:40:55 -- setup/common.sh@31 -- # read -r var val _ 00:12:32.198 09:40:55 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:32.198 09:40:55 -- setup/common.sh@32 -- # continue 00:12:32.198 09:40:55 -- setup/common.sh@31 -- # IFS=': ' 00:12:32.198 09:40:55 -- setup/common.sh@31 -- # read -r var val _ 00:12:32.198 09:40:55 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:32.198 09:40:55 -- setup/common.sh@32 -- # continue 00:12:32.198 09:40:55 -- setup/common.sh@31 -- # IFS=': ' 00:12:32.198 09:40:55 -- setup/common.sh@31 -- # read -r var val _ 00:12:32.198 09:40:55 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:32.198 09:40:55 -- setup/common.sh@32 -- # continue 00:12:32.198 09:40:55 -- setup/common.sh@31 -- # IFS=': ' 00:12:32.198 09:40:55 -- setup/common.sh@31 -- # read -r var val _ 00:12:32.198 09:40:55 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:32.198 09:40:55 -- setup/common.sh@32 -- # continue 00:12:32.198 09:40:55 -- setup/common.sh@31 -- # IFS=': ' 00:12:32.198 09:40:55 -- setup/common.sh@31 -- # read -r var val _ 00:12:32.198 09:40:55 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:32.198 09:40:55 -- setup/common.sh@32 -- # continue 00:12:32.198 09:40:55 -- setup/common.sh@31 -- # IFS=': ' 00:12:32.198 09:40:55 -- setup/common.sh@31 -- # read -r var val _ 00:12:32.198 09:40:55 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:32.198 09:40:55 -- setup/common.sh@32 -- # continue 00:12:32.198 09:40:55 -- setup/common.sh@31 -- # IFS=': ' 00:12:32.198 09:40:55 -- setup/common.sh@31 -- # read -r var val _ 00:12:32.198 09:40:55 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:32.198 09:40:55 -- setup/common.sh@32 -- # continue 00:12:32.198 09:40:55 -- setup/common.sh@31 -- # IFS=': ' 00:12:32.198 09:40:55 -- setup/common.sh@31 -- # read -r var val _ 00:12:32.198 09:40:55 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:32.198 09:40:55 -- setup/common.sh@32 -- # continue 00:12:32.198 09:40:55 -- setup/common.sh@31 -- # IFS=': ' 00:12:32.198 09:40:55 -- setup/common.sh@31 -- # read -r var val _ 00:12:32.198 09:40:55 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:32.198 09:40:55 -- setup/common.sh@32 -- # continue 00:12:32.198 09:40:55 -- setup/common.sh@31 -- # IFS=': ' 00:12:32.198 09:40:55 -- setup/common.sh@31 -- # read -r var val _ 00:12:32.198 09:40:55 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:32.198 09:40:55 -- setup/common.sh@32 -- # continue 00:12:32.198 09:40:55 -- setup/common.sh@31 -- # IFS=': ' 00:12:32.198 09:40:55 -- setup/common.sh@31 -- # read -r var val _ 00:12:32.198 09:40:55 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:32.198 09:40:55 -- setup/common.sh@32 -- # continue 
00:12:32.198 09:40:55 -- setup/common.sh@31 -- # IFS=': ' 00:12:32.198 09:40:55 -- setup/common.sh@31 -- # read -r var val _ 00:12:32.198 09:40:55 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:32.198 09:40:55 -- setup/common.sh@32 -- # continue 00:12:32.198 09:40:55 -- setup/common.sh@31 -- # IFS=': ' 00:12:32.198 09:40:55 -- setup/common.sh@31 -- # read -r var val _ 00:12:32.198 09:40:55 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:32.198 09:40:55 -- setup/common.sh@32 -- # continue 00:12:32.198 09:40:55 -- setup/common.sh@31 -- # IFS=': ' 00:12:32.198 09:40:55 -- setup/common.sh@31 -- # read -r var val _ 00:12:32.198 09:40:55 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:32.198 09:40:55 -- setup/common.sh@32 -- # continue 00:12:32.198 09:40:55 -- setup/common.sh@31 -- # IFS=': ' 00:12:32.198 09:40:55 -- setup/common.sh@31 -- # read -r var val _ 00:12:32.198 09:40:55 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:32.198 09:40:55 -- setup/common.sh@32 -- # continue 00:12:32.198 09:40:55 -- setup/common.sh@31 -- # IFS=': ' 00:12:32.198 09:40:55 -- setup/common.sh@31 -- # read -r var val _ 00:12:32.198 09:40:55 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:32.198 09:40:55 -- setup/common.sh@32 -- # continue 00:12:32.198 09:40:55 -- setup/common.sh@31 -- # IFS=': ' 00:12:32.198 09:40:55 -- setup/common.sh@31 -- # read -r var val _ 00:12:32.198 09:40:55 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:32.198 09:40:55 -- setup/common.sh@32 -- # continue 00:12:32.198 09:40:55 -- setup/common.sh@31 -- # IFS=': ' 00:12:32.198 09:40:55 -- setup/common.sh@31 -- # read -r var val _ 00:12:32.199 09:40:55 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:32.199 09:40:55 -- setup/common.sh@32 -- # continue 00:12:32.199 09:40:55 -- setup/common.sh@31 -- # IFS=': ' 00:12:32.199 09:40:55 -- setup/common.sh@31 -- # read -r var val _ 00:12:32.199 09:40:55 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:32.199 09:40:55 -- setup/common.sh@32 -- # continue 00:12:32.199 09:40:55 -- setup/common.sh@31 -- # IFS=': ' 00:12:32.199 09:40:55 -- setup/common.sh@31 -- # read -r var val _ 00:12:32.199 09:40:55 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:32.199 09:40:55 -- setup/common.sh@32 -- # continue 00:12:32.199 09:40:55 -- setup/common.sh@31 -- # IFS=': ' 00:12:32.199 09:40:55 -- setup/common.sh@31 -- # read -r var val _ 00:12:32.199 09:40:55 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:32.199 09:40:55 -- setup/common.sh@32 -- # continue 00:12:32.199 09:40:55 -- setup/common.sh@31 -- # IFS=': ' 00:12:32.199 09:40:55 -- setup/common.sh@31 -- # read -r var val _ 00:12:32.199 09:40:55 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:32.199 09:40:55 -- setup/common.sh@32 -- # continue 00:12:32.199 09:40:55 -- setup/common.sh@31 -- # IFS=': ' 00:12:32.199 09:40:55 -- setup/common.sh@31 -- # read -r var val _ 00:12:32.199 09:40:55 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:32.199 09:40:55 -- setup/common.sh@32 -- # continue 00:12:32.199 09:40:55 -- setup/common.sh@31 -- # IFS=': ' 00:12:32.199 09:40:55 -- setup/common.sh@31 -- # read -r var val _ 00:12:32.199 09:40:55 -- setup/common.sh@32 -- # [[ 
VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:32.199 09:40:55 -- setup/common.sh@32 -- # continue 00:12:32.199 09:40:55 -- setup/common.sh@31 -- # IFS=': ' 00:12:32.199 09:40:55 -- setup/common.sh@31 -- # read -r var val _ 00:12:32.199 09:40:55 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:32.199 09:40:55 -- setup/common.sh@32 -- # continue 00:12:32.199 09:40:55 -- setup/common.sh@31 -- # IFS=': ' 00:12:32.199 09:40:55 -- setup/common.sh@31 -- # read -r var val _ 00:12:32.199 09:40:55 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:32.199 09:40:55 -- setup/common.sh@32 -- # continue 00:12:32.199 09:40:55 -- setup/common.sh@31 -- # IFS=': ' 00:12:32.199 09:40:55 -- setup/common.sh@31 -- # read -r var val _ 00:12:32.199 09:40:55 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:32.199 09:40:55 -- setup/common.sh@32 -- # continue 00:12:32.199 09:40:55 -- setup/common.sh@31 -- # IFS=': ' 00:12:32.199 09:40:55 -- setup/common.sh@31 -- # read -r var val _ 00:12:32.199 09:40:55 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:32.199 09:40:55 -- setup/common.sh@32 -- # continue 00:12:32.199 09:40:55 -- setup/common.sh@31 -- # IFS=': ' 00:12:32.199 09:40:55 -- setup/common.sh@31 -- # read -r var val _ 00:12:32.199 09:40:55 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:32.199 09:40:55 -- setup/common.sh@32 -- # continue 00:12:32.199 09:40:55 -- setup/common.sh@31 -- # IFS=': ' 00:12:32.199 09:40:55 -- setup/common.sh@31 -- # read -r var val _ 00:12:32.199 09:40:55 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:32.199 09:40:55 -- setup/common.sh@32 -- # continue 00:12:32.199 09:40:55 -- setup/common.sh@31 -- # IFS=': ' 00:12:32.199 09:40:55 -- setup/common.sh@31 -- # read -r var val _ 00:12:32.199 09:40:55 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:32.199 09:40:55 -- setup/common.sh@32 -- # continue 00:12:32.199 09:40:55 -- setup/common.sh@31 -- # IFS=': ' 00:12:32.199 09:40:55 -- setup/common.sh@31 -- # read -r var val _ 00:12:32.199 09:40:55 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:32.199 09:40:55 -- setup/common.sh@32 -- # continue 00:12:32.199 09:40:55 -- setup/common.sh@31 -- # IFS=': ' 00:12:32.199 09:40:55 -- setup/common.sh@31 -- # read -r var val _ 00:12:32.199 09:40:55 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:32.199 09:40:55 -- setup/common.sh@32 -- # continue 00:12:32.199 09:40:55 -- setup/common.sh@31 -- # IFS=': ' 00:12:32.199 09:40:55 -- setup/common.sh@31 -- # read -r var val _ 00:12:32.199 09:40:55 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:32.199 09:40:55 -- setup/common.sh@32 -- # continue 00:12:32.199 09:40:55 -- setup/common.sh@31 -- # IFS=': ' 00:12:32.199 09:40:55 -- setup/common.sh@31 -- # read -r var val _ 00:12:32.199 09:40:55 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:32.199 09:40:55 -- setup/common.sh@32 -- # continue 00:12:32.199 09:40:55 -- setup/common.sh@31 -- # IFS=': ' 00:12:32.199 09:40:55 -- setup/common.sh@31 -- # read -r var val _ 00:12:32.199 09:40:55 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:32.199 09:40:55 -- setup/common.sh@32 -- # continue 00:12:32.199 09:40:55 -- setup/common.sh@31 -- # IFS=': 
' 00:12:32.199 09:40:55 -- setup/common.sh@31 -- # read -r var val _ 00:12:32.199 09:40:55 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:32.199 09:40:55 -- setup/common.sh@32 -- # continue 00:12:32.199 09:40:55 -- setup/common.sh@31 -- # IFS=': ' 00:12:32.199 09:40:55 -- setup/common.sh@31 -- # read -r var val _ 00:12:32.199 09:40:55 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:32.199 09:40:55 -- setup/common.sh@33 -- # echo 0 00:12:32.199 09:40:55 -- setup/common.sh@33 -- # return 0 00:12:32.199 09:40:55 -- setup/hugepages.sh@100 -- # resv=0 00:12:32.199 09:40:55 -- setup/hugepages.sh@102 -- # echo nr_hugepages=512 00:12:32.199 nr_hugepages=512 00:12:32.199 09:40:55 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:12:32.199 resv_hugepages=0 00:12:32.199 09:40:55 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:12:32.199 surplus_hugepages=0 00:12:32.199 09:40:55 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:12:32.199 anon_hugepages=0 00:12:32.199 09:40:55 -- setup/hugepages.sh@107 -- # (( 512 == nr_hugepages + surp + resv )) 00:12:32.199 09:40:55 -- setup/hugepages.sh@109 -- # (( 512 == nr_hugepages )) 00:12:32.199 09:40:55 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:12:32.199 09:40:55 -- setup/common.sh@17 -- # local get=HugePages_Total 00:12:32.199 09:40:55 -- setup/common.sh@18 -- # local node= 00:12:32.199 09:40:55 -- setup/common.sh@19 -- # local var val 00:12:32.199 09:40:55 -- setup/common.sh@20 -- # local mem_f mem 00:12:32.199 09:40:55 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:12:32.199 09:40:55 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:12:32.199 09:40:55 -- setup/common.sh@25 -- # [[ -n '' ]] 00:12:32.199 09:40:55 -- setup/common.sh@28 -- # mapfile -t mem 00:12:32.199 09:40:55 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:12:32.199 09:40:55 -- setup/common.sh@31 -- # IFS=': ' 00:12:32.199 09:40:55 -- setup/common.sh@31 -- # read -r var val _ 00:12:32.199 09:40:55 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 8586236 kB' 'MemAvailable: 10548508 kB' 'Buffers: 2436 kB' 'Cached: 2174868 kB' 'SwapCached: 0 kB' 'Active: 849948 kB' 'Inactive: 1449608 kB' 'Active(anon): 132716 kB' 'Inactive(anon): 0 kB' 'Active(file): 717232 kB' 'Inactive(file): 1449608 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 1444 kB' 'Writeback: 0 kB' 'AnonPages: 124092 kB' 'Mapped: 48756 kB' 'Shmem: 10464 kB' 'KReclaimable: 64740 kB' 'Slab: 137840 kB' 'SReclaimable: 64740 kB' 'SUnreclaim: 73100 kB' 'KernelStack: 6352 kB' 'PageTables: 4400 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13985300 kB' 'Committed_AS: 357248 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54708 kB' 'VmallocChunk: 0 kB' 'Percpu: 6240 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 173932 kB' 'DirectMap2M: 6117376 kB' 'DirectMap1G: 8388608 kB' 00:12:32.199 09:40:55 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:32.199 09:40:55 -- setup/common.sh@32 -- # continue 00:12:32.199 
09:40:55 -- setup/common.sh@31 -- # IFS=': ' 00:12:32.199 09:40:55 -- setup/common.sh@31 -- # read -r var val _ 00:12:32.199 09:40:55 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:32.199 09:40:55 -- setup/common.sh@32 -- # continue 00:12:32.199 09:40:55 -- setup/common.sh@31 -- # IFS=': ' 00:12:32.199 09:40:55 -- setup/common.sh@31 -- # read -r var val _ 00:12:32.199 09:40:55 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:32.199 09:40:55 -- setup/common.sh@32 -- # continue 00:12:32.199 09:40:55 -- setup/common.sh@31 -- # IFS=': ' 00:12:32.199 09:40:55 -- setup/common.sh@31 -- # read -r var val _ 00:12:32.199 09:40:55 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:32.199 09:40:55 -- setup/common.sh@32 -- # continue 00:12:32.199 09:40:55 -- setup/common.sh@31 -- # IFS=': ' 00:12:32.199 09:40:55 -- setup/common.sh@31 -- # read -r var val _ 00:12:32.199 09:40:55 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:32.199 09:40:55 -- setup/common.sh@32 -- # continue 00:12:32.199 09:40:55 -- setup/common.sh@31 -- # IFS=': ' 00:12:32.199 09:40:55 -- setup/common.sh@31 -- # read -r var val _ 00:12:32.199 09:40:55 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:32.199 09:40:55 -- setup/common.sh@32 -- # continue 00:12:32.199 09:40:55 -- setup/common.sh@31 -- # IFS=': ' 00:12:32.199 09:40:55 -- setup/common.sh@31 -- # read -r var val _ 00:12:32.199 09:40:55 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:32.199 09:40:55 -- setup/common.sh@32 -- # continue 00:12:32.199 09:40:55 -- setup/common.sh@31 -- # IFS=': ' 00:12:32.199 09:40:55 -- setup/common.sh@31 -- # read -r var val _ 00:12:32.199 09:40:55 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:32.199 09:40:55 -- setup/common.sh@32 -- # continue 00:12:32.199 09:40:55 -- setup/common.sh@31 -- # IFS=': ' 00:12:32.199 09:40:55 -- setup/common.sh@31 -- # read -r var val _ 00:12:32.199 09:40:55 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:32.199 09:40:55 -- setup/common.sh@32 -- # continue 00:12:32.199 09:40:55 -- setup/common.sh@31 -- # IFS=': ' 00:12:32.199 09:40:55 -- setup/common.sh@31 -- # read -r var val _ 00:12:32.199 09:40:55 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:32.199 09:40:55 -- setup/common.sh@32 -- # continue 00:12:32.199 09:40:55 -- setup/common.sh@31 -- # IFS=': ' 00:12:32.199 09:40:55 -- setup/common.sh@31 -- # read -r var val _ 00:12:32.199 09:40:55 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:32.199 09:40:55 -- setup/common.sh@32 -- # continue 00:12:32.199 09:40:55 -- setup/common.sh@31 -- # IFS=': ' 00:12:32.200 09:40:55 -- setup/common.sh@31 -- # read -r var val _ 00:12:32.200 09:40:55 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:32.200 09:40:55 -- setup/common.sh@32 -- # continue 00:12:32.200 09:40:55 -- setup/common.sh@31 -- # IFS=': ' 00:12:32.200 09:40:55 -- setup/common.sh@31 -- # read -r var val _ 00:12:32.200 09:40:55 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:32.200 09:40:55 -- setup/common.sh@32 -- # continue 00:12:32.200 09:40:55 -- setup/common.sh@31 -- # IFS=': ' 00:12:32.200 09:40:55 -- setup/common.sh@31 -- # read -r var val _ 00:12:32.200 09:40:55 -- setup/common.sh@32 -- 
# [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:32.200 09:40:55 -- setup/common.sh@32 -- # continue 00:12:32.200 09:40:55 -- setup/common.sh@31 -- # IFS=': ' 00:12:32.200 09:40:55 -- setup/common.sh@31 -- # read -r var val _ 00:12:32.200 09:40:55 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:32.200 09:40:55 -- setup/common.sh@32 -- # continue 00:12:32.200 09:40:55 -- setup/common.sh@31 -- # IFS=': ' 00:12:32.200 09:40:55 -- setup/common.sh@31 -- # read -r var val _ 00:12:32.200 09:40:55 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:32.200 09:40:55 -- setup/common.sh@32 -- # continue 00:12:32.200 09:40:55 -- setup/common.sh@31 -- # IFS=': ' 00:12:32.200 09:40:55 -- setup/common.sh@31 -- # read -r var val _ 00:12:32.200 09:40:55 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:32.200 09:40:55 -- setup/common.sh@32 -- # continue 00:12:32.200 09:40:55 -- setup/common.sh@31 -- # IFS=': ' 00:12:32.200 09:40:55 -- setup/common.sh@31 -- # read -r var val _ 00:12:32.200 09:40:55 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:32.200 09:40:55 -- setup/common.sh@32 -- # continue 00:12:32.200 09:40:55 -- setup/common.sh@31 -- # IFS=': ' 00:12:32.200 09:40:55 -- setup/common.sh@31 -- # read -r var val _ 00:12:32.200 09:40:55 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:32.200 09:40:55 -- setup/common.sh@32 -- # continue 00:12:32.200 09:40:55 -- setup/common.sh@31 -- # IFS=': ' 00:12:32.200 09:40:55 -- setup/common.sh@31 -- # read -r var val _ 00:12:32.200 09:40:55 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:32.200 09:40:55 -- setup/common.sh@32 -- # continue 00:12:32.200 09:40:55 -- setup/common.sh@31 -- # IFS=': ' 00:12:32.200 09:40:55 -- setup/common.sh@31 -- # read -r var val _ 00:12:32.200 09:40:55 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:32.200 09:40:55 -- setup/common.sh@32 -- # continue 00:12:32.200 09:40:55 -- setup/common.sh@31 -- # IFS=': ' 00:12:32.200 09:40:55 -- setup/common.sh@31 -- # read -r var val _ 00:12:32.200 09:40:55 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:32.200 09:40:55 -- setup/common.sh@32 -- # continue 00:12:32.200 09:40:55 -- setup/common.sh@31 -- # IFS=': ' 00:12:32.200 09:40:55 -- setup/common.sh@31 -- # read -r var val _ 00:12:32.200 09:40:55 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:32.200 09:40:55 -- setup/common.sh@32 -- # continue 00:12:32.200 09:40:55 -- setup/common.sh@31 -- # IFS=': ' 00:12:32.200 09:40:55 -- setup/common.sh@31 -- # read -r var val _ 00:12:32.200 09:40:55 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:32.200 09:40:55 -- setup/common.sh@32 -- # continue 00:12:32.200 09:40:55 -- setup/common.sh@31 -- # IFS=': ' 00:12:32.200 09:40:55 -- setup/common.sh@31 -- # read -r var val _ 00:12:32.200 09:40:55 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:32.200 09:40:55 -- setup/common.sh@32 -- # continue 00:12:32.200 09:40:55 -- setup/common.sh@31 -- # IFS=': ' 00:12:32.200 09:40:55 -- setup/common.sh@31 -- # read -r var val _ 00:12:32.200 09:40:55 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:32.200 09:40:55 -- setup/common.sh@32 -- # continue 00:12:32.200 09:40:55 -- setup/common.sh@31 -- # IFS=': ' 00:12:32.200 
09:40:55 -- setup/common.sh@31 -- # read -r var val _ 00:12:32.200 09:40:55 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:32.200 09:40:55 -- setup/common.sh@32 -- # continue 00:12:32.200 09:40:55 -- setup/common.sh@31 -- # IFS=': ' 00:12:32.200 09:40:55 -- setup/common.sh@31 -- # read -r var val _ 00:12:32.200 09:40:55 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:32.200 09:40:55 -- setup/common.sh@32 -- # continue 00:12:32.200 09:40:55 -- setup/common.sh@31 -- # IFS=': ' 00:12:32.200 09:40:55 -- setup/common.sh@31 -- # read -r var val _ 00:12:32.200 09:40:55 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:32.200 09:40:55 -- setup/common.sh@32 -- # continue 00:12:32.200 09:40:55 -- setup/common.sh@31 -- # IFS=': ' 00:12:32.200 09:40:55 -- setup/common.sh@31 -- # read -r var val _ 00:12:32.200 09:40:55 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:32.200 09:40:55 -- setup/common.sh@32 -- # continue 00:12:32.200 09:40:55 -- setup/common.sh@31 -- # IFS=': ' 00:12:32.200 09:40:55 -- setup/common.sh@31 -- # read -r var val _ 00:12:32.200 09:40:55 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:32.200 09:40:55 -- setup/common.sh@32 -- # continue 00:12:32.200 09:40:55 -- setup/common.sh@31 -- # IFS=': ' 00:12:32.200 09:40:55 -- setup/common.sh@31 -- # read -r var val _ 00:12:32.200 09:40:55 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:32.200 09:40:55 -- setup/common.sh@32 -- # continue 00:12:32.200 09:40:55 -- setup/common.sh@31 -- # IFS=': ' 00:12:32.200 09:40:55 -- setup/common.sh@31 -- # read -r var val _ 00:12:32.200 09:40:55 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:32.200 09:40:55 -- setup/common.sh@32 -- # continue 00:12:32.200 09:40:55 -- setup/common.sh@31 -- # IFS=': ' 00:12:32.200 09:40:55 -- setup/common.sh@31 -- # read -r var val _ 00:12:32.200 09:40:55 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:32.200 09:40:55 -- setup/common.sh@32 -- # continue 00:12:32.200 09:40:55 -- setup/common.sh@31 -- # IFS=': ' 00:12:32.200 09:40:55 -- setup/common.sh@31 -- # read -r var val _ 00:12:32.200 09:40:55 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:32.200 09:40:55 -- setup/common.sh@32 -- # continue 00:12:32.200 09:40:55 -- setup/common.sh@31 -- # IFS=': ' 00:12:32.200 09:40:55 -- setup/common.sh@31 -- # read -r var val _ 00:12:32.200 09:40:55 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:32.200 09:40:55 -- setup/common.sh@32 -- # continue 00:12:32.200 09:40:55 -- setup/common.sh@31 -- # IFS=': ' 00:12:32.200 09:40:55 -- setup/common.sh@31 -- # read -r var val _ 00:12:32.200 09:40:55 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:32.200 09:40:55 -- setup/common.sh@32 -- # continue 00:12:32.200 09:40:55 -- setup/common.sh@31 -- # IFS=': ' 00:12:32.200 09:40:55 -- setup/common.sh@31 -- # read -r var val _ 00:12:32.200 09:40:55 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:32.200 09:40:55 -- setup/common.sh@32 -- # continue 00:12:32.200 09:40:55 -- setup/common.sh@31 -- # IFS=': ' 00:12:32.200 09:40:55 -- setup/common.sh@31 -- # read -r var val _ 00:12:32.200 09:40:55 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l 
]] 00:12:32.200 09:40:55 -- setup/common.sh@32 -- # continue 00:12:32.200 09:40:55 -- setup/common.sh@31 -- # IFS=': ' 00:12:32.200 09:40:55 -- setup/common.sh@31 -- # read -r var val _ 00:12:32.200 09:40:55 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:32.200 09:40:55 -- setup/common.sh@32 -- # continue 00:12:32.200 09:40:55 -- setup/common.sh@31 -- # IFS=': ' 00:12:32.200 09:40:55 -- setup/common.sh@31 -- # read -r var val _ 00:12:32.200 09:40:55 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:32.200 09:40:55 -- setup/common.sh@32 -- # continue 00:12:32.200 09:40:55 -- setup/common.sh@31 -- # IFS=': ' 00:12:32.200 09:40:55 -- setup/common.sh@31 -- # read -r var val _ 00:12:32.200 09:40:55 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:32.200 09:40:55 -- setup/common.sh@32 -- # continue 00:12:32.200 09:40:55 -- setup/common.sh@31 -- # IFS=': ' 00:12:32.200 09:40:55 -- setup/common.sh@31 -- # read -r var val _ 00:12:32.200 09:40:55 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:32.200 09:40:55 -- setup/common.sh@32 -- # continue 00:12:32.200 09:40:55 -- setup/common.sh@31 -- # IFS=': ' 00:12:32.200 09:40:55 -- setup/common.sh@31 -- # read -r var val _ 00:12:32.200 09:40:55 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:32.200 09:40:55 -- setup/common.sh@32 -- # continue 00:12:32.200 09:40:55 -- setup/common.sh@31 -- # IFS=': ' 00:12:32.200 09:40:55 -- setup/common.sh@31 -- # read -r var val _ 00:12:32.200 09:40:55 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:32.200 09:40:55 -- setup/common.sh@32 -- # continue 00:12:32.200 09:40:55 -- setup/common.sh@31 -- # IFS=': ' 00:12:32.201 09:40:55 -- setup/common.sh@31 -- # read -r var val _ 00:12:32.201 09:40:55 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:32.201 09:40:55 -- setup/common.sh@32 -- # continue 00:12:32.201 09:40:55 -- setup/common.sh@31 -- # IFS=': ' 00:12:32.201 09:40:55 -- setup/common.sh@31 -- # read -r var val _ 00:12:32.201 09:40:55 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:32.201 09:40:55 -- setup/common.sh@32 -- # continue 00:12:32.201 09:40:55 -- setup/common.sh@31 -- # IFS=': ' 00:12:32.201 09:40:55 -- setup/common.sh@31 -- # read -r var val _ 00:12:32.201 09:40:55 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:32.201 09:40:55 -- setup/common.sh@32 -- # continue 00:12:32.201 09:40:55 -- setup/common.sh@31 -- # IFS=': ' 00:12:32.201 09:40:55 -- setup/common.sh@31 -- # read -r var val _ 00:12:32.201 09:40:55 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:32.201 09:40:55 -- setup/common.sh@33 -- # echo 512 00:12:32.201 09:40:55 -- setup/common.sh@33 -- # return 0 00:12:32.201 09:40:55 -- setup/hugepages.sh@110 -- # (( 512 == nr_hugepages + surp + resv )) 00:12:32.201 09:40:55 -- setup/hugepages.sh@112 -- # get_nodes 00:12:32.201 09:40:55 -- setup/hugepages.sh@27 -- # local node 00:12:32.201 09:40:55 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:12:32.201 09:40:55 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:12:32.201 09:40:55 -- setup/hugepages.sh@32 -- # no_nodes=1 00:12:32.201 09:40:55 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:12:32.201 09:40:55 -- 
setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:12:32.201 09:40:55 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:12:32.201 09:40:55 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:12:32.201 09:40:55 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:12:32.201 09:40:55 -- setup/common.sh@18 -- # local node=0 00:12:32.201 09:40:55 -- setup/common.sh@19 -- # local var val 00:12:32.201 09:40:55 -- setup/common.sh@20 -- # local mem_f mem 00:12:32.201 09:40:55 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:12:32.201 09:40:55 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:12:32.201 09:40:55 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:12:32.201 09:40:55 -- setup/common.sh@28 -- # mapfile -t mem 00:12:32.201 09:40:55 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:12:32.201 09:40:55 -- setup/common.sh@31 -- # IFS=': ' 00:12:32.201 09:40:55 -- setup/common.sh@31 -- # read -r var val _ 00:12:32.201 09:40:55 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 8586236 kB' 'MemUsed: 3655736 kB' 'SwapCached: 0 kB' 'Active: 849956 kB' 'Inactive: 1449608 kB' 'Active(anon): 132724 kB' 'Inactive(anon): 0 kB' 'Active(file): 717232 kB' 'Inactive(file): 1449608 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 1444 kB' 'Writeback: 0 kB' 'FilePages: 2177304 kB' 'Mapped: 48756 kB' 'AnonPages: 124092 kB' 'Shmem: 10464 kB' 'KernelStack: 6352 kB' 'PageTables: 4400 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 64740 kB' 'Slab: 137844 kB' 'SReclaimable: 64740 kB' 'SUnreclaim: 73104 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:12:32.201 09:40:55 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:32.201 09:40:55 -- setup/common.sh@32 -- # continue 00:12:32.201 09:40:55 -- setup/common.sh@31 -- # IFS=': ' 00:12:32.201 09:40:55 -- setup/common.sh@31 -- # read -r var val _ 00:12:32.201 09:40:55 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:32.201 09:40:55 -- setup/common.sh@32 -- # continue 00:12:32.201 09:40:55 -- setup/common.sh@31 -- # IFS=': ' 00:12:32.201 09:40:55 -- setup/common.sh@31 -- # read -r var val _ 00:12:32.201 09:40:55 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:32.201 09:40:55 -- setup/common.sh@32 -- # continue 00:12:32.201 09:40:55 -- setup/common.sh@31 -- # IFS=': ' 00:12:32.201 09:40:55 -- setup/common.sh@31 -- # read -r var val _ 00:12:32.201 09:40:55 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:32.201 09:40:55 -- setup/common.sh@32 -- # continue 00:12:32.201 09:40:55 -- setup/common.sh@31 -- # IFS=': ' 00:12:32.201 09:40:55 -- setup/common.sh@31 -- # read -r var val _ 00:12:32.201 09:40:55 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:32.201 09:40:55 -- setup/common.sh@32 -- # continue 00:12:32.201 09:40:55 -- setup/common.sh@31 -- # IFS=': ' 00:12:32.201 09:40:55 -- setup/common.sh@31 -- # read -r var val _ 00:12:32.201 09:40:55 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:32.201 09:40:55 -- setup/common.sh@32 -- # continue 00:12:32.201 09:40:55 -- setup/common.sh@31 -- # IFS=': ' 00:12:32.201 09:40:55 -- setup/common.sh@31 -- # read -r var 
val _ 00:12:32.201 09:40:55 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:32.201 09:40:55 -- setup/common.sh@32 -- # continue 00:12:32.201 09:40:55 -- setup/common.sh@31 -- # IFS=': ' 00:12:32.201 09:40:55 -- setup/common.sh@31 -- # read -r var val _ 00:12:32.201 09:40:55 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:32.201 09:40:55 -- setup/common.sh@32 -- # continue 00:12:32.201 09:40:55 -- setup/common.sh@31 -- # IFS=': ' 00:12:32.201 09:40:55 -- setup/common.sh@31 -- # read -r var val _ 00:12:32.201 09:40:55 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:32.201 09:40:55 -- setup/common.sh@32 -- # continue 00:12:32.201 09:40:55 -- setup/common.sh@31 -- # IFS=': ' 00:12:32.201 09:40:55 -- setup/common.sh@31 -- # read -r var val _ 00:12:32.201 09:40:55 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:32.201 09:40:55 -- setup/common.sh@32 -- # continue 00:12:32.201 09:40:55 -- setup/common.sh@31 -- # IFS=': ' 00:12:32.201 09:40:55 -- setup/common.sh@31 -- # read -r var val _ 00:12:32.201 09:40:55 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:32.201 09:40:55 -- setup/common.sh@32 -- # continue 00:12:32.201 09:40:55 -- setup/common.sh@31 -- # IFS=': ' 00:12:32.201 09:40:55 -- setup/common.sh@31 -- # read -r var val _ 00:12:32.201 09:40:55 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:32.201 09:40:55 -- setup/common.sh@32 -- # continue 00:12:32.201 09:40:55 -- setup/common.sh@31 -- # IFS=': ' 00:12:32.201 09:40:55 -- setup/common.sh@31 -- # read -r var val _ 00:12:32.201 09:40:55 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:32.201 09:40:55 -- setup/common.sh@32 -- # continue 00:12:32.201 09:40:55 -- setup/common.sh@31 -- # IFS=': ' 00:12:32.201 09:40:55 -- setup/common.sh@31 -- # read -r var val _ 00:12:32.201 09:40:55 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:32.201 09:40:55 -- setup/common.sh@32 -- # continue 00:12:32.201 09:40:55 -- setup/common.sh@31 -- # IFS=': ' 00:12:32.201 09:40:55 -- setup/common.sh@31 -- # read -r var val _ 00:12:32.201 09:40:55 -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:32.201 09:40:55 -- setup/common.sh@32 -- # continue 00:12:32.201 09:40:55 -- setup/common.sh@31 -- # IFS=': ' 00:12:32.201 09:40:55 -- setup/common.sh@31 -- # read -r var val _ 00:12:32.201 09:40:55 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:32.201 09:40:55 -- setup/common.sh@32 -- # continue 00:12:32.201 09:40:55 -- setup/common.sh@31 -- # IFS=': ' 00:12:32.201 09:40:55 -- setup/common.sh@31 -- # read -r var val _ 00:12:32.201 09:40:55 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:32.201 09:40:55 -- setup/common.sh@32 -- # continue 00:12:32.201 09:40:55 -- setup/common.sh@31 -- # IFS=': ' 00:12:32.201 09:40:55 -- setup/common.sh@31 -- # read -r var val _ 00:12:32.201 09:40:55 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:32.201 09:40:55 -- setup/common.sh@32 -- # continue 00:12:32.201 09:40:55 -- setup/common.sh@31 -- # IFS=': ' 00:12:32.201 09:40:55 -- setup/common.sh@31 -- # read -r var val _ 00:12:32.201 09:40:55 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:32.201 09:40:55 -- setup/common.sh@32 -- # continue 00:12:32.201 09:40:55 -- 
setup/common.sh@31 -- # IFS=': ' 00:12:32.201 09:40:55 -- setup/common.sh@31 -- # read -r var val _ 00:12:32.201 09:40:55 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:32.201 09:40:55 -- setup/common.sh@32 -- # continue 00:12:32.201 09:40:55 -- setup/common.sh@31 -- # IFS=': ' 00:12:32.201 09:40:55 -- setup/common.sh@31 -- # read -r var val _ 00:12:32.201 09:40:55 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:32.201 09:40:55 -- setup/common.sh@32 -- # continue 00:12:32.201 09:40:55 -- setup/common.sh@31 -- # IFS=': ' 00:12:32.201 09:40:55 -- setup/common.sh@31 -- # read -r var val _ 00:12:32.201 09:40:55 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:32.201 09:40:55 -- setup/common.sh@32 -- # continue 00:12:32.201 09:40:55 -- setup/common.sh@31 -- # IFS=': ' 00:12:32.201 09:40:55 -- setup/common.sh@31 -- # read -r var val _ 00:12:32.201 09:40:55 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:32.201 09:40:55 -- setup/common.sh@32 -- # continue 00:12:32.201 09:40:55 -- setup/common.sh@31 -- # IFS=': ' 00:12:32.201 09:40:55 -- setup/common.sh@31 -- # read -r var val _ 00:12:32.201 09:40:55 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:32.201 09:40:55 -- setup/common.sh@32 -- # continue 00:12:32.201 09:40:55 -- setup/common.sh@31 -- # IFS=': ' 00:12:32.201 09:40:55 -- setup/common.sh@31 -- # read -r var val _ 00:12:32.201 09:40:55 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:32.201 09:40:55 -- setup/common.sh@32 -- # continue 00:12:32.201 09:40:55 -- setup/common.sh@31 -- # IFS=': ' 00:12:32.201 09:40:55 -- setup/common.sh@31 -- # read -r var val _ 00:12:32.201 09:40:55 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:32.201 09:40:55 -- setup/common.sh@32 -- # continue 00:12:32.201 09:40:55 -- setup/common.sh@31 -- # IFS=': ' 00:12:32.201 09:40:55 -- setup/common.sh@31 -- # read -r var val _ 00:12:32.201 09:40:55 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:32.201 09:40:55 -- setup/common.sh@32 -- # continue 00:12:32.201 09:40:55 -- setup/common.sh@31 -- # IFS=': ' 00:12:32.201 09:40:55 -- setup/common.sh@31 -- # read -r var val _ 00:12:32.202 09:40:55 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:32.202 09:40:55 -- setup/common.sh@32 -- # continue 00:12:32.202 09:40:55 -- setup/common.sh@31 -- # IFS=': ' 00:12:32.202 09:40:55 -- setup/common.sh@31 -- # read -r var val _ 00:12:32.202 09:40:55 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:32.202 09:40:55 -- setup/common.sh@32 -- # continue 00:12:32.202 09:40:55 -- setup/common.sh@31 -- # IFS=': ' 00:12:32.202 09:40:55 -- setup/common.sh@31 -- # read -r var val _ 00:12:32.202 09:40:55 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:32.202 09:40:55 -- setup/common.sh@32 -- # continue 00:12:32.202 09:40:55 -- setup/common.sh@31 -- # IFS=': ' 00:12:32.202 09:40:55 -- setup/common.sh@31 -- # read -r var val _ 00:12:32.202 09:40:55 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:32.202 09:40:55 -- setup/common.sh@32 -- # continue 00:12:32.202 09:40:55 -- setup/common.sh@31 -- # IFS=': ' 00:12:32.202 09:40:55 -- setup/common.sh@31 -- # read -r var val _ 00:12:32.202 09:40:55 -- setup/common.sh@32 -- # [[ FileHugePages == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:32.202 09:40:55 -- setup/common.sh@32 -- # continue 00:12:32.202 09:40:55 -- setup/common.sh@31 -- # IFS=': ' 00:12:32.202 09:40:55 -- setup/common.sh@31 -- # read -r var val _ 00:12:32.202 09:40:55 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:32.202 09:40:55 -- setup/common.sh@32 -- # continue 00:12:32.202 09:40:55 -- setup/common.sh@31 -- # IFS=': ' 00:12:32.202 09:40:55 -- setup/common.sh@31 -- # read -r var val _ 00:12:32.202 09:40:55 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:32.202 09:40:55 -- setup/common.sh@32 -- # continue 00:12:32.202 09:40:55 -- setup/common.sh@31 -- # IFS=': ' 00:12:32.202 09:40:55 -- setup/common.sh@31 -- # read -r var val _ 00:12:32.202 09:40:55 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:32.202 09:40:55 -- setup/common.sh@32 -- # continue 00:12:32.202 09:40:55 -- setup/common.sh@31 -- # IFS=': ' 00:12:32.202 09:40:55 -- setup/common.sh@31 -- # read -r var val _ 00:12:32.202 09:40:55 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:32.202 09:40:55 -- setup/common.sh@32 -- # continue 00:12:32.202 09:40:55 -- setup/common.sh@31 -- # IFS=': ' 00:12:32.202 09:40:55 -- setup/common.sh@31 -- # read -r var val _ 00:12:32.202 09:40:55 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:32.202 09:40:55 -- setup/common.sh@33 -- # echo 0 00:12:32.202 09:40:55 -- setup/common.sh@33 -- # return 0 00:12:32.202 09:40:55 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:12:32.202 09:40:55 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:12:32.202 09:40:55 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:12:32.202 09:40:55 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:12:32.202 09:40:55 -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512' 00:12:32.202 node0=512 expecting 512 00:12:32.202 09:40:55 -- setup/hugepages.sh@130 -- # [[ 512 == \5\1\2 ]] 00:12:32.202 00:12:32.202 real 0m0.592s 00:12:32.202 user 0m0.296s 00:12:32.202 sys 0m0.287s 00:12:32.202 ************************************ 00:12:32.202 END TEST custom_alloc 00:12:32.202 ************************************ 00:12:32.202 09:40:55 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:12:32.202 09:40:55 -- common/autotest_common.sh@10 -- # set +x 00:12:32.202 09:40:55 -- setup/hugepages.sh@215 -- # run_test no_shrink_alloc no_shrink_alloc 00:12:32.202 09:40:55 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:12:32.202 09:40:55 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:12:32.202 09:40:55 -- common/autotest_common.sh@10 -- # set +x 00:12:32.464 ************************************ 00:12:32.464 START TEST no_shrink_alloc 00:12:32.464 ************************************ 00:12:32.464 09:40:55 -- common/autotest_common.sh@1111 -- # no_shrink_alloc 00:12:32.464 09:40:55 -- setup/hugepages.sh@195 -- # get_test_nr_hugepages 2097152 0 00:12:32.464 09:40:55 -- setup/hugepages.sh@49 -- # local size=2097152 00:12:32.464 09:40:55 -- setup/hugepages.sh@50 -- # (( 2 > 1 )) 00:12:32.464 09:40:55 -- setup/hugepages.sh@51 -- # shift 00:12:32.464 09:40:55 -- setup/hugepages.sh@52 -- # node_ids=('0') 00:12:32.464 09:40:55 -- setup/hugepages.sh@52 -- # local node_ids 00:12:32.464 09:40:55 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:12:32.464 09:40:55 -- setup/hugepages.sh@57 -- # 
nr_hugepages=1024 00:12:32.464 09:40:55 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 00:12:32.464 09:40:55 -- setup/hugepages.sh@62 -- # user_nodes=('0') 00:12:32.464 09:40:55 -- setup/hugepages.sh@62 -- # local user_nodes 00:12:32.464 09:40:55 -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:12:32.464 09:40:55 -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:12:32.464 09:40:55 -- setup/hugepages.sh@67 -- # nodes_test=() 00:12:32.464 09:40:55 -- setup/hugepages.sh@67 -- # local -g nodes_test 00:12:32.464 09:40:55 -- setup/hugepages.sh@69 -- # (( 1 > 0 )) 00:12:32.464 09:40:55 -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:12:32.464 09:40:55 -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=1024 00:12:32.464 09:40:55 -- setup/hugepages.sh@73 -- # return 0 00:12:32.464 09:40:55 -- setup/hugepages.sh@198 -- # setup output 00:12:32.464 09:40:55 -- setup/common.sh@9 -- # [[ output == output ]] 00:12:32.464 09:40:55 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:12:32.729 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:12:32.729 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:12:32.729 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:12:32.729 09:40:55 -- setup/hugepages.sh@199 -- # verify_nr_hugepages 00:12:32.729 09:40:55 -- setup/hugepages.sh@89 -- # local node 00:12:32.729 09:40:55 -- setup/hugepages.sh@90 -- # local sorted_t 00:12:32.729 09:40:55 -- setup/hugepages.sh@91 -- # local sorted_s 00:12:32.729 09:40:55 -- setup/hugepages.sh@92 -- # local surp 00:12:32.729 09:40:55 -- setup/hugepages.sh@93 -- # local resv 00:12:32.729 09:40:55 -- setup/hugepages.sh@94 -- # local anon 00:12:32.729 09:40:55 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:12:32.729 09:40:55 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:12:32.729 09:40:55 -- setup/common.sh@17 -- # local get=AnonHugePages 00:12:32.729 09:40:55 -- setup/common.sh@18 -- # local node= 00:12:32.729 09:40:55 -- setup/common.sh@19 -- # local var val 00:12:32.729 09:40:55 -- setup/common.sh@20 -- # local mem_f mem 00:12:32.729 09:40:55 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:12:32.729 09:40:55 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:12:32.729 09:40:55 -- setup/common.sh@25 -- # [[ -n '' ]] 00:12:32.729 09:40:55 -- setup/common.sh@28 -- # mapfile -t mem 00:12:32.730 09:40:55 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:12:32.730 09:40:55 -- setup/common.sh@31 -- # IFS=': ' 00:12:32.730 09:40:55 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 7542200 kB' 'MemAvailable: 9504468 kB' 'Buffers: 2436 kB' 'Cached: 2174868 kB' 'SwapCached: 0 kB' 'Active: 846052 kB' 'Inactive: 1449608 kB' 'Active(anon): 128820 kB' 'Inactive(anon): 0 kB' 'Active(file): 717232 kB' 'Inactive(file): 1449608 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 1584 kB' 'Writeback: 0 kB' 'AnonPages: 119784 kB' 'Mapped: 48316 kB' 'Shmem: 10464 kB' 'KReclaimable: 64736 kB' 'Slab: 137772 kB' 'SReclaimable: 64736 kB' 'SUnreclaim: 73036 kB' 'KernelStack: 6336 kB' 'PageTables: 4452 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461012 kB' 'Committed_AS: 339776 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54612 kB' 
'VmallocChunk: 0 kB' 'Percpu: 6240 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 173932 kB' 'DirectMap2M: 6117376 kB' 'DirectMap1G: 8388608 kB' 00:12:32.730 09:40:55 -- setup/common.sh@31 -- # read -r var val _ 00:12:32.730 09:40:55 -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:32.730 09:40:55 -- setup/common.sh@32 -- # continue 00:12:32.730 09:40:55 -- setup/common.sh@31 -- # IFS=': ' 00:12:32.730 09:40:55 -- setup/common.sh@31 -- # read -r var val _ 00:12:32.730 09:40:55 -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:32.730 09:40:55 -- setup/common.sh@32 -- # continue 00:12:32.730 09:40:55 -- setup/common.sh@31 -- # IFS=': ' 00:12:32.730 09:40:55 -- setup/common.sh@31 -- # read -r var val _ 00:12:32.730 09:40:55 -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:32.730 09:40:55 -- setup/common.sh@32 -- # continue 00:12:32.730 09:40:55 -- setup/common.sh@31 -- # IFS=': ' 00:12:32.730 09:40:55 -- setup/common.sh@31 -- # read -r var val _ 00:12:32.730 09:40:55 -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:32.730 09:40:55 -- setup/common.sh@32 -- # continue 00:12:32.730 09:40:55 -- setup/common.sh@31 -- # IFS=': ' 00:12:32.730 09:40:55 -- setup/common.sh@31 -- # read -r var val _ 00:12:32.730 09:40:55 -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:32.730 09:40:55 -- setup/common.sh@32 -- # continue 00:12:32.730 09:40:55 -- setup/common.sh@31 -- # IFS=': ' 00:12:32.730 09:40:55 -- setup/common.sh@31 -- # read -r var val _ 00:12:32.730 09:40:55 -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:32.730 09:40:55 -- setup/common.sh@32 -- # continue 00:12:32.730 09:40:55 -- setup/common.sh@31 -- # IFS=': ' 00:12:32.730 09:40:55 -- setup/common.sh@31 -- # read -r var val _ 00:12:32.730 09:40:55 -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:32.730 09:40:55 -- setup/common.sh@32 -- # continue 00:12:32.730 09:40:55 -- setup/common.sh@31 -- # IFS=': ' 00:12:32.730 09:40:55 -- setup/common.sh@31 -- # read -r var val _ 00:12:32.730 09:40:55 -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:32.730 09:40:55 -- setup/common.sh@32 -- # continue 00:12:32.730 09:40:55 -- setup/common.sh@31 -- # IFS=': ' 00:12:32.730 09:40:55 -- setup/common.sh@31 -- # read -r var val _ 00:12:32.730 09:40:55 -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:32.730 09:40:55 -- setup/common.sh@32 -- # continue 00:12:32.730 09:40:55 -- setup/common.sh@31 -- # IFS=': ' 00:12:32.730 09:40:55 -- setup/common.sh@31 -- # read -r var val _ 00:12:32.730 09:40:55 -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:32.730 09:40:55 -- setup/common.sh@32 -- # continue 00:12:32.730 09:40:55 -- setup/common.sh@31 -- # IFS=': ' 00:12:32.730 09:40:55 -- setup/common.sh@31 -- # read -r var val _ 00:12:32.730 09:40:55 -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:32.730 09:40:55 -- setup/common.sh@32 -- # continue 00:12:32.730 09:40:55 -- setup/common.sh@31 -- # IFS=': ' 00:12:32.730 09:40:55 -- setup/common.sh@31 -- # read 
-r var val _ 00:12:32.730 09:40:55 -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:32.730 09:40:55 -- setup/common.sh@32 -- # continue 00:12:32.730 09:40:55 -- setup/common.sh@31 -- # IFS=': ' 00:12:32.730 09:40:55 -- setup/common.sh@31 -- # read -r var val _ 00:12:32.730 09:40:55 -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:32.730 09:40:55 -- setup/common.sh@32 -- # continue 00:12:32.730 09:40:55 -- setup/common.sh@31 -- # IFS=': ' 00:12:32.730 09:40:55 -- setup/common.sh@31 -- # read -r var val _ 00:12:32.730 09:40:55 -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:32.730 09:40:55 -- setup/common.sh@32 -- # continue 00:12:32.730 09:40:55 -- setup/common.sh@31 -- # IFS=': ' 00:12:32.730 09:40:55 -- setup/common.sh@31 -- # read -r var val _ 00:12:32.730 09:40:55 -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:32.730 09:40:55 -- setup/common.sh@32 -- # continue 00:12:32.730 09:40:55 -- setup/common.sh@31 -- # IFS=': ' 00:12:32.730 09:40:55 -- setup/common.sh@31 -- # read -r var val _ 00:12:32.730 09:40:55 -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:32.730 09:40:55 -- setup/common.sh@32 -- # continue 00:12:32.730 09:40:55 -- setup/common.sh@31 -- # IFS=': ' 00:12:32.730 09:40:55 -- setup/common.sh@31 -- # read -r var val _ 00:12:32.730 09:40:55 -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:32.730 09:40:55 -- setup/common.sh@32 -- # continue 00:12:32.730 09:40:55 -- setup/common.sh@31 -- # IFS=': ' 00:12:32.730 09:40:55 -- setup/common.sh@31 -- # read -r var val _ 00:12:32.730 09:40:55 -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:32.730 09:40:55 -- setup/common.sh@32 -- # continue 00:12:32.730 09:40:55 -- setup/common.sh@31 -- # IFS=': ' 00:12:32.730 09:40:55 -- setup/common.sh@31 -- # read -r var val _ 00:12:32.730 09:40:55 -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:32.730 09:40:55 -- setup/common.sh@32 -- # continue 00:12:32.730 09:40:55 -- setup/common.sh@31 -- # IFS=': ' 00:12:32.730 09:40:55 -- setup/common.sh@31 -- # read -r var val _ 00:12:32.730 09:40:55 -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:32.730 09:40:55 -- setup/common.sh@32 -- # continue 00:12:32.730 09:40:55 -- setup/common.sh@31 -- # IFS=': ' 00:12:32.730 09:40:55 -- setup/common.sh@31 -- # read -r var val _ 00:12:32.730 09:40:55 -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:32.730 09:40:55 -- setup/common.sh@32 -- # continue 00:12:32.730 09:40:55 -- setup/common.sh@31 -- # IFS=': ' 00:12:32.730 09:40:55 -- setup/common.sh@31 -- # read -r var val _ 00:12:32.730 09:40:55 -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:32.730 09:40:55 -- setup/common.sh@32 -- # continue 00:12:32.730 09:40:55 -- setup/common.sh@31 -- # IFS=': ' 00:12:32.730 09:40:55 -- setup/common.sh@31 -- # read -r var val _ 00:12:32.730 09:40:55 -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:32.730 09:40:55 -- setup/common.sh@32 -- # continue 00:12:32.730 09:40:55 -- setup/common.sh@31 -- # IFS=': ' 00:12:32.730 09:40:55 -- setup/common.sh@31 -- # read -r var val _ 00:12:32.730 09:40:55 -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:32.730 09:40:55 -- setup/common.sh@32 -- # continue 00:12:32.730 09:40:55 -- setup/common.sh@31 -- # IFS=': ' 
00:12:32.730 09:40:55 -- setup/common.sh@31 -- # read -r var val _ 00:12:32.730 09:40:55 -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:32.730 09:40:55 -- setup/common.sh@32 -- # continue 00:12:32.730 09:40:55 -- setup/common.sh@31 -- # IFS=': ' 00:12:32.730 09:40:55 -- setup/common.sh@31 -- # read -r var val _ 00:12:32.730 09:40:55 -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:32.730 09:40:55 -- setup/common.sh@32 -- # continue 00:12:32.730 09:40:55 -- setup/common.sh@31 -- # IFS=': ' 00:12:32.730 09:40:55 -- setup/common.sh@31 -- # read -r var val _ 00:12:32.730 09:40:55 -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:32.730 09:40:55 -- setup/common.sh@32 -- # continue 00:12:32.730 09:40:55 -- setup/common.sh@31 -- # IFS=': ' 00:12:32.730 09:40:55 -- setup/common.sh@31 -- # read -r var val _ 00:12:32.730 09:40:55 -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:32.730 09:40:55 -- setup/common.sh@32 -- # continue 00:12:32.730 09:40:55 -- setup/common.sh@31 -- # IFS=': ' 00:12:32.730 09:40:55 -- setup/common.sh@31 -- # read -r var val _ 00:12:32.730 09:40:55 -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:32.730 09:40:55 -- setup/common.sh@32 -- # continue 00:12:32.730 09:40:55 -- setup/common.sh@31 -- # IFS=': ' 00:12:32.730 09:40:55 -- setup/common.sh@31 -- # read -r var val _ 00:12:32.730 09:40:55 -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:32.730 09:40:55 -- setup/common.sh@32 -- # continue 00:12:32.730 09:40:55 -- setup/common.sh@31 -- # IFS=': ' 00:12:32.730 09:40:55 -- setup/common.sh@31 -- # read -r var val _ 00:12:32.730 09:40:55 -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:32.730 09:40:55 -- setup/common.sh@32 -- # continue 00:12:32.730 09:40:55 -- setup/common.sh@31 -- # IFS=': ' 00:12:32.730 09:40:55 -- setup/common.sh@31 -- # read -r var val _ 00:12:32.730 09:40:55 -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:32.730 09:40:55 -- setup/common.sh@32 -- # continue 00:12:32.730 09:40:55 -- setup/common.sh@31 -- # IFS=': ' 00:12:32.730 09:40:55 -- setup/common.sh@31 -- # read -r var val _ 00:12:32.730 09:40:55 -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:32.730 09:40:55 -- setup/common.sh@32 -- # continue 00:12:32.730 09:40:55 -- setup/common.sh@31 -- # IFS=': ' 00:12:32.730 09:40:55 -- setup/common.sh@31 -- # read -r var val _ 00:12:32.730 09:40:55 -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:32.730 09:40:55 -- setup/common.sh@32 -- # continue 00:12:32.730 09:40:55 -- setup/common.sh@31 -- # IFS=': ' 00:12:32.730 09:40:55 -- setup/common.sh@31 -- # read -r var val _ 00:12:32.730 09:40:55 -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:32.730 09:40:55 -- setup/common.sh@32 -- # continue 00:12:32.730 09:40:55 -- setup/common.sh@31 -- # IFS=': ' 00:12:32.730 09:40:55 -- setup/common.sh@31 -- # read -r var val _ 00:12:32.730 09:40:55 -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:32.730 09:40:55 -- setup/common.sh@32 -- # continue 00:12:32.730 09:40:55 -- setup/common.sh@31 -- # IFS=': ' 00:12:32.730 09:40:55 -- setup/common.sh@31 -- # read -r var val _ 00:12:32.731 09:40:55 -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:32.731 09:40:55 -- 
setup/common.sh@32 -- # continue 00:12:32.731 09:40:55 -- setup/common.sh@31 -- # IFS=': ' 00:12:32.731 09:40:55 -- setup/common.sh@31 -- # read -r var val _ 00:12:32.731 09:40:55 -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:32.731 09:40:55 -- setup/common.sh@32 -- # continue 00:12:32.731 09:40:55 -- setup/common.sh@31 -- # IFS=': ' 00:12:32.731 09:40:55 -- setup/common.sh@31 -- # read -r var val _ 00:12:32.731 09:40:55 -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:32.731 09:40:55 -- setup/common.sh@32 -- # continue 00:12:32.731 09:40:55 -- setup/common.sh@31 -- # IFS=': ' 00:12:32.731 09:40:55 -- setup/common.sh@31 -- # read -r var val _ 00:12:32.731 09:40:55 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:32.731 09:40:55 -- setup/common.sh@32 -- # continue 00:12:32.731 09:40:55 -- setup/common.sh@31 -- # IFS=': ' 00:12:32.731 09:40:55 -- setup/common.sh@31 -- # read -r var val _ 00:12:32.731 09:40:55 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:32.731 09:40:55 -- setup/common.sh@33 -- # echo 0 00:12:32.731 09:40:55 -- setup/common.sh@33 -- # return 0 00:12:32.731 09:40:55 -- setup/hugepages.sh@97 -- # anon=0 00:12:32.731 09:40:56 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:12:32.731 09:40:56 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:12:32.731 09:40:56 -- setup/common.sh@18 -- # local node= 00:12:32.731 09:40:56 -- setup/common.sh@19 -- # local var val 00:12:32.731 09:40:56 -- setup/common.sh@20 -- # local mem_f mem 00:12:32.731 09:40:56 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:12:32.731 09:40:56 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:12:32.731 09:40:56 -- setup/common.sh@25 -- # [[ -n '' ]] 00:12:32.731 09:40:56 -- setup/common.sh@28 -- # mapfile -t mem 00:12:32.731 09:40:56 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:12:32.731 09:40:56 -- setup/common.sh@31 -- # IFS=': ' 00:12:32.731 09:40:56 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 7542452 kB' 'MemAvailable: 9504720 kB' 'Buffers: 2436 kB' 'Cached: 2174868 kB' 'SwapCached: 0 kB' 'Active: 845700 kB' 'Inactive: 1449608 kB' 'Active(anon): 128468 kB' 'Inactive(anon): 0 kB' 'Active(file): 717232 kB' 'Inactive(file): 1449608 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 1584 kB' 'Writeback: 0 kB' 'AnonPages: 119632 kB' 'Mapped: 48084 kB' 'Shmem: 10464 kB' 'KReclaimable: 64736 kB' 'Slab: 137764 kB' 'SReclaimable: 64736 kB' 'SUnreclaim: 73028 kB' 'KernelStack: 6300 kB' 'PageTables: 4256 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461012 kB' 'Committed_AS: 339776 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54596 kB' 'VmallocChunk: 0 kB' 'Percpu: 6240 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 173932 kB' 'DirectMap2M: 6117376 kB' 'DirectMap1G: 8388608 kB' 00:12:32.731 09:40:56 -- setup/common.sh@31 -- # read -r var val _ 00:12:32.731 09:40:56 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:32.731 09:40:56 -- 
setup/common.sh@32 -- # continue 00:12:32.731 09:40:56 -- setup/common.sh@31 -- # IFS=': ' 00:12:32.731 09:40:56 -- setup/common.sh@31 -- # read -r var val _ 00:12:32.731 09:40:56 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:32.731 09:40:56 -- setup/common.sh@32 -- # continue 00:12:32.731 09:40:56 -- setup/common.sh@31 -- # IFS=': ' 00:12:32.731 09:40:56 -- setup/common.sh@31 -- # read -r var val _ 00:12:32.731 09:40:56 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:32.731 09:40:56 -- setup/common.sh@32 -- # continue 00:12:32.731 09:40:56 -- setup/common.sh@31 -- # IFS=': ' 00:12:32.731 09:40:56 -- setup/common.sh@31 -- # read -r var val _ 00:12:32.731 09:40:56 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:32.731 09:40:56 -- setup/common.sh@32 -- # continue 00:12:32.731 09:40:56 -- setup/common.sh@31 -- # IFS=': ' 00:12:32.731 09:40:56 -- setup/common.sh@31 -- # read -r var val _ 00:12:32.731 09:40:56 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:32.731 09:40:56 -- setup/common.sh@32 -- # continue 00:12:32.731 09:40:56 -- setup/common.sh@31 -- # IFS=': ' 00:12:32.731 09:40:56 -- setup/common.sh@31 -- # read -r var val _ 00:12:32.731 09:40:56 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:32.731 09:40:56 -- setup/common.sh@32 -- # continue 00:12:32.731 09:40:56 -- setup/common.sh@31 -- # IFS=': ' 00:12:32.731 09:40:56 -- setup/common.sh@31 -- # read -r var val _ 00:12:32.731 09:40:56 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:32.731 09:40:56 -- setup/common.sh@32 -- # continue 00:12:32.731 09:40:56 -- setup/common.sh@31 -- # IFS=': ' 00:12:32.731 09:40:56 -- setup/common.sh@31 -- # read -r var val _ 00:12:32.731 09:40:56 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:32.731 09:40:56 -- setup/common.sh@32 -- # continue 00:12:32.731 09:40:56 -- setup/common.sh@31 -- # IFS=': ' 00:12:32.731 09:40:56 -- setup/common.sh@31 -- # read -r var val _ 00:12:32.731 09:40:56 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:32.731 09:40:56 -- setup/common.sh@32 -- # continue 00:12:32.731 09:40:56 -- setup/common.sh@31 -- # IFS=': ' 00:12:32.731 09:40:56 -- setup/common.sh@31 -- # read -r var val _ 00:12:32.731 09:40:56 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:32.731 09:40:56 -- setup/common.sh@32 -- # continue 00:12:32.731 09:40:56 -- setup/common.sh@31 -- # IFS=': ' 00:12:32.731 09:40:56 -- setup/common.sh@31 -- # read -r var val _ 00:12:32.731 09:40:56 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:32.731 09:40:56 -- setup/common.sh@32 -- # continue 00:12:32.731 09:40:56 -- setup/common.sh@31 -- # IFS=': ' 00:12:32.731 09:40:56 -- setup/common.sh@31 -- # read -r var val _ 00:12:32.731 09:40:56 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:32.731 09:40:56 -- setup/common.sh@32 -- # continue 00:12:32.731 09:40:56 -- setup/common.sh@31 -- # IFS=': ' 00:12:32.731 09:40:56 -- setup/common.sh@31 -- # read -r var val _ 00:12:32.731 09:40:56 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:32.731 09:40:56 -- setup/common.sh@32 -- # continue 00:12:32.731 09:40:56 -- setup/common.sh@31 -- # IFS=': ' 00:12:32.731 09:40:56 -- setup/common.sh@31 -- # read -r var val _ 00:12:32.731 09:40:56 -- 
setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:32.731 09:40:56 -- setup/common.sh@32 -- # continue 00:12:32.731 09:40:56 -- setup/common.sh@31 -- # IFS=': ' 00:12:32.731 09:40:56 -- setup/common.sh@31 -- # read -r var val _ 00:12:32.731 09:40:56 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:32.731 09:40:56 -- setup/common.sh@32 -- # continue 00:12:32.731 09:40:56 -- setup/common.sh@31 -- # IFS=': ' 00:12:32.731 09:40:56 -- setup/common.sh@31 -- # read -r var val _ 00:12:32.731 09:40:56 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:32.731 09:40:56 -- setup/common.sh@32 -- # continue 00:12:32.731 09:40:56 -- setup/common.sh@31 -- # IFS=': ' 00:12:32.731 09:40:56 -- setup/common.sh@31 -- # read -r var val _ 00:12:32.731 09:40:56 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:32.731 09:40:56 -- setup/common.sh@32 -- # continue 00:12:32.731 09:40:56 -- setup/common.sh@31 -- # IFS=': ' 00:12:32.731 09:40:56 -- setup/common.sh@31 -- # read -r var val _ 00:12:32.731 09:40:56 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:32.731 09:40:56 -- setup/common.sh@32 -- # continue 00:12:32.731 09:40:56 -- setup/common.sh@31 -- # IFS=': ' 00:12:32.731 09:40:56 -- setup/common.sh@31 -- # read -r var val _ 00:12:32.731 09:40:56 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:32.731 09:40:56 -- setup/common.sh@32 -- # continue 00:12:32.731 09:40:56 -- setup/common.sh@31 -- # IFS=': ' 00:12:32.731 09:40:56 -- setup/common.sh@31 -- # read -r var val _ 00:12:32.731 09:40:56 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:32.731 09:40:56 -- setup/common.sh@32 -- # continue 00:12:32.731 09:40:56 -- setup/common.sh@31 -- # IFS=': ' 00:12:32.731 09:40:56 -- setup/common.sh@31 -- # read -r var val _ 00:12:32.731 09:40:56 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:32.731 09:40:56 -- setup/common.sh@32 -- # continue 00:12:32.731 09:40:56 -- setup/common.sh@31 -- # IFS=': ' 00:12:32.731 09:40:56 -- setup/common.sh@31 -- # read -r var val _ 00:12:32.731 09:40:56 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:32.731 09:40:56 -- setup/common.sh@32 -- # continue 00:12:32.731 09:40:56 -- setup/common.sh@31 -- # IFS=': ' 00:12:32.731 09:40:56 -- setup/common.sh@31 -- # read -r var val _ 00:12:32.731 09:40:56 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:32.731 09:40:56 -- setup/common.sh@32 -- # continue 00:12:32.731 09:40:56 -- setup/common.sh@31 -- # IFS=': ' 00:12:32.731 09:40:56 -- setup/common.sh@31 -- # read -r var val _ 00:12:32.731 09:40:56 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:32.731 09:40:56 -- setup/common.sh@32 -- # continue 00:12:32.731 09:40:56 -- setup/common.sh@31 -- # IFS=': ' 00:12:32.731 09:40:56 -- setup/common.sh@31 -- # read -r var val _ 00:12:32.731 09:40:56 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:32.731 09:40:56 -- setup/common.sh@32 -- # continue 00:12:32.731 09:40:56 -- setup/common.sh@31 -- # IFS=': ' 00:12:32.731 09:40:56 -- setup/common.sh@31 -- # read -r var val _ 00:12:32.731 09:40:56 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:32.731 09:40:56 -- setup/common.sh@32 -- # continue 00:12:32.731 09:40:56 -- setup/common.sh@31 -- # IFS=': ' 00:12:32.731 09:40:56 -- 
setup/common.sh@31 -- # read -r var val _ 00:12:32.731 09:40:56 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:32.731 09:40:56 -- setup/common.sh@32 -- # continue 00:12:32.731 09:40:56 -- setup/common.sh@31 -- # IFS=': ' 00:12:32.731 09:40:56 -- setup/common.sh@31 -- # read -r var val _ 00:12:32.731 09:40:56 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:32.731 09:40:56 -- setup/common.sh@32 -- # continue 00:12:32.731 09:40:56 -- setup/common.sh@31 -- # IFS=': ' 00:12:32.731 09:40:56 -- setup/common.sh@31 -- # read -r var val _ 00:12:32.731 09:40:56 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:32.731 09:40:56 -- setup/common.sh@32 -- # continue 00:12:32.731 09:40:56 -- setup/common.sh@31 -- # IFS=': ' 00:12:32.732 09:40:56 -- setup/common.sh@31 -- # read -r var val _ 00:12:32.732 09:40:56 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:32.732 09:40:56 -- setup/common.sh@32 -- # continue 00:12:32.732 09:40:56 -- setup/common.sh@31 -- # IFS=': ' 00:12:32.732 09:40:56 -- setup/common.sh@31 -- # read -r var val _ 00:12:32.732 09:40:56 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:32.732 09:40:56 -- setup/common.sh@32 -- # continue 00:12:32.732 09:40:56 -- setup/common.sh@31 -- # IFS=': ' 00:12:32.732 09:40:56 -- setup/common.sh@31 -- # read -r var val _ 00:12:32.732 09:40:56 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:32.732 09:40:56 -- setup/common.sh@32 -- # continue 00:12:32.732 09:40:56 -- setup/common.sh@31 -- # IFS=': ' 00:12:32.732 09:40:56 -- setup/common.sh@31 -- # read -r var val _ 00:12:32.732 09:40:56 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:32.732 09:40:56 -- setup/common.sh@32 -- # continue 00:12:32.732 09:40:56 -- setup/common.sh@31 -- # IFS=': ' 00:12:32.732 09:40:56 -- setup/common.sh@31 -- # read -r var val _ 00:12:32.732 09:40:56 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:32.732 09:40:56 -- setup/common.sh@32 -- # continue 00:12:32.732 09:40:56 -- setup/common.sh@31 -- # IFS=': ' 00:12:32.732 09:40:56 -- setup/common.sh@31 -- # read -r var val _ 00:12:32.732 09:40:56 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:32.732 09:40:56 -- setup/common.sh@32 -- # continue 00:12:32.732 09:40:56 -- setup/common.sh@31 -- # IFS=': ' 00:12:32.732 09:40:56 -- setup/common.sh@31 -- # read -r var val _ 00:12:32.732 09:40:56 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:32.732 09:40:56 -- setup/common.sh@32 -- # continue 00:12:32.732 09:40:56 -- setup/common.sh@31 -- # IFS=': ' 00:12:32.732 09:40:56 -- setup/common.sh@31 -- # read -r var val _ 00:12:32.732 09:40:56 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:32.732 09:40:56 -- setup/common.sh@32 -- # continue 00:12:32.732 09:40:56 -- setup/common.sh@31 -- # IFS=': ' 00:12:32.732 09:40:56 -- setup/common.sh@31 -- # read -r var val _ 00:12:32.732 09:40:56 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:32.732 09:40:56 -- setup/common.sh@32 -- # continue 00:12:32.732 09:40:56 -- setup/common.sh@31 -- # IFS=': ' 00:12:32.732 09:40:56 -- setup/common.sh@31 -- # read -r var val _ 00:12:32.732 09:40:56 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:32.732 09:40:56 -- 
setup/common.sh@32 -- # continue 00:12:32.732 09:40:56 -- setup/common.sh@31 -- # IFS=': ' 00:12:32.732 09:40:56 -- setup/common.sh@31 -- # read -r var val _ 00:12:32.732 09:40:56 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:32.732 09:40:56 -- setup/common.sh@32 -- # continue 00:12:32.732 09:40:56 -- setup/common.sh@31 -- # IFS=': ' 00:12:32.732 09:40:56 -- setup/common.sh@31 -- # read -r var val _ 00:12:32.732 09:40:56 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:32.732 09:40:56 -- setup/common.sh@32 -- # continue 00:12:32.732 09:40:56 -- setup/common.sh@31 -- # IFS=': ' 00:12:32.732 09:40:56 -- setup/common.sh@31 -- # read -r var val _ 00:12:32.732 09:40:56 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:32.732 09:40:56 -- setup/common.sh@32 -- # continue 00:12:32.732 09:40:56 -- setup/common.sh@31 -- # IFS=': ' 00:12:32.732 09:40:56 -- setup/common.sh@31 -- # read -r var val _ 00:12:32.732 09:40:56 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:32.732 09:40:56 -- setup/common.sh@32 -- # continue 00:12:32.732 09:40:56 -- setup/common.sh@31 -- # IFS=': ' 00:12:32.732 09:40:56 -- setup/common.sh@31 -- # read -r var val _ 00:12:32.732 09:40:56 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:32.732 09:40:56 -- setup/common.sh@32 -- # continue 00:12:32.732 09:40:56 -- setup/common.sh@31 -- # IFS=': ' 00:12:32.732 09:40:56 -- setup/common.sh@31 -- # read -r var val _ 00:12:32.732 09:40:56 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:32.732 09:40:56 -- setup/common.sh@32 -- # continue 00:12:32.732 09:40:56 -- setup/common.sh@31 -- # IFS=': ' 00:12:32.732 09:40:56 -- setup/common.sh@31 -- # read -r var val _ 00:12:32.732 09:40:56 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:32.732 09:40:56 -- setup/common.sh@32 -- # continue 00:12:32.732 09:40:56 -- setup/common.sh@31 -- # IFS=': ' 00:12:32.732 09:40:56 -- setup/common.sh@31 -- # read -r var val _ 00:12:32.732 09:40:56 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:32.732 09:40:56 -- setup/common.sh@32 -- # continue 00:12:32.732 09:40:56 -- setup/common.sh@31 -- # IFS=': ' 00:12:32.732 09:40:56 -- setup/common.sh@31 -- # read -r var val _ 00:12:32.732 09:40:56 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:32.732 09:40:56 -- setup/common.sh@32 -- # continue 00:12:32.732 09:40:56 -- setup/common.sh@31 -- # IFS=': ' 00:12:32.732 09:40:56 -- setup/common.sh@31 -- # read -r var val _ 00:12:32.732 09:40:56 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:32.732 09:40:56 -- setup/common.sh@32 -- # continue 00:12:32.732 09:40:56 -- setup/common.sh@31 -- # IFS=': ' 00:12:32.732 09:40:56 -- setup/common.sh@31 -- # read -r var val _ 00:12:32.732 09:40:56 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:32.732 09:40:56 -- setup/common.sh@32 -- # continue 00:12:32.732 09:40:56 -- setup/common.sh@31 -- # IFS=': ' 00:12:32.732 09:40:56 -- setup/common.sh@31 -- # read -r var val _ 00:12:32.732 09:40:56 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:32.732 09:40:56 -- setup/common.sh@32 -- # continue 00:12:32.732 09:40:56 -- setup/common.sh@31 -- # IFS=': ' 00:12:32.732 09:40:56 -- setup/common.sh@31 -- # read -r 
var val _ 00:12:32.732 09:40:56 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:32.732 09:40:56 -- setup/common.sh@33 -- # echo 0 00:12:32.732 09:40:56 -- setup/common.sh@33 -- # return 0 00:12:32.732 09:40:56 -- setup/hugepages.sh@99 -- # surp=0 00:12:32.732 09:40:56 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:12:32.732 09:40:56 -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:12:32.732 09:40:56 -- setup/common.sh@18 -- # local node= 00:12:32.732 09:40:56 -- setup/common.sh@19 -- # local var val 00:12:32.732 09:40:56 -- setup/common.sh@20 -- # local mem_f mem 00:12:32.732 09:40:56 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:12:32.732 09:40:56 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:12:32.732 09:40:56 -- setup/common.sh@25 -- # [[ -n '' ]] 00:12:32.732 09:40:56 -- setup/common.sh@28 -- # mapfile -t mem 00:12:32.732 09:40:56 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:12:32.732 09:40:56 -- setup/common.sh@31 -- # IFS=': ' 00:12:32.732 09:40:56 -- setup/common.sh@31 -- # read -r var val _ 00:12:32.732 09:40:56 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 7542836 kB' 'MemAvailable: 9505104 kB' 'Buffers: 2436 kB' 'Cached: 2174868 kB' 'SwapCached: 0 kB' 'Active: 845712 kB' 'Inactive: 1449608 kB' 'Active(anon): 128480 kB' 'Inactive(anon): 0 kB' 'Active(file): 717232 kB' 'Inactive(file): 1449608 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 1584 kB' 'Writeback: 0 kB' 'AnonPages: 119636 kB' 'Mapped: 48084 kB' 'Shmem: 10464 kB' 'KReclaimable: 64736 kB' 'Slab: 137764 kB' 'SReclaimable: 64736 kB' 'SUnreclaim: 73028 kB' 'KernelStack: 6300 kB' 'PageTables: 4256 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461012 kB' 'Committed_AS: 339776 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54596 kB' 'VmallocChunk: 0 kB' 'Percpu: 6240 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 173932 kB' 'DirectMap2M: 6117376 kB' 'DirectMap1G: 8388608 kB' 00:12:32.732 09:40:56 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:32.732 09:40:56 -- setup/common.sh@32 -- # continue 00:12:32.732 09:40:56 -- setup/common.sh@31 -- # IFS=': ' 00:12:32.732 09:40:56 -- setup/common.sh@31 -- # read -r var val _ 00:12:32.732 09:40:56 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:32.732 09:40:56 -- setup/common.sh@32 -- # continue 00:12:32.732 09:40:56 -- setup/common.sh@31 -- # IFS=': ' 00:12:32.732 09:40:56 -- setup/common.sh@31 -- # read -r var val _ 00:12:32.732 09:40:56 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:32.732 09:40:56 -- setup/common.sh@32 -- # continue 00:12:32.732 09:40:56 -- setup/common.sh@31 -- # IFS=': ' 00:12:32.732 09:40:56 -- setup/common.sh@31 -- # read -r var val _ 00:12:32.732 09:40:56 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:32.732 09:40:56 -- setup/common.sh@32 -- # continue 00:12:32.732 09:40:56 -- setup/common.sh@31 -- # IFS=': ' 00:12:32.732 09:40:56 -- setup/common.sh@31 -- # read -r var 
val _ 00:12:32.732 09:40:56 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:32.732 09:40:56 -- setup/common.sh@32 -- # continue 00:12:32.732 09:40:56 -- setup/common.sh@31 -- # IFS=': ' 00:12:32.732 09:40:56 -- setup/common.sh@31 -- # read -r var val _ 00:12:32.732 09:40:56 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:32.732 09:40:56 -- setup/common.sh@32 -- # continue 00:12:32.732 09:40:56 -- setup/common.sh@31 -- # IFS=': ' 00:12:32.732 09:40:56 -- setup/common.sh@31 -- # read -r var val _ 00:12:32.732 09:40:56 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:32.732 09:40:56 -- setup/common.sh@32 -- # continue 00:12:32.732 09:40:56 -- setup/common.sh@31 -- # IFS=': ' 00:12:32.732 09:40:56 -- setup/common.sh@31 -- # read -r var val _ 00:12:32.732 09:40:56 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:32.732 09:40:56 -- setup/common.sh@32 -- # continue 00:12:32.732 09:40:56 -- setup/common.sh@31 -- # IFS=': ' 00:12:32.732 09:40:56 -- setup/common.sh@31 -- # read -r var val _ 00:12:32.732 09:40:56 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:32.732 09:40:56 -- setup/common.sh@32 -- # continue 00:12:32.732 09:40:56 -- setup/common.sh@31 -- # IFS=': ' 00:12:32.732 09:40:56 -- setup/common.sh@31 -- # read -r var val _ 00:12:32.732 09:40:56 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:32.732 09:40:56 -- setup/common.sh@32 -- # continue 00:12:32.732 09:40:56 -- setup/common.sh@31 -- # IFS=': ' 00:12:32.732 09:40:56 -- setup/common.sh@31 -- # read -r var val _ 00:12:32.732 09:40:56 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:32.732 09:40:56 -- setup/common.sh@32 -- # continue 00:12:32.732 09:40:56 -- setup/common.sh@31 -- # IFS=': ' 00:12:32.733 09:40:56 -- setup/common.sh@31 -- # read -r var val _ 00:12:32.733 09:40:56 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:32.733 09:40:56 -- setup/common.sh@32 -- # continue 00:12:32.733 09:40:56 -- setup/common.sh@31 -- # IFS=': ' 00:12:32.733 09:40:56 -- setup/common.sh@31 -- # read -r var val _ 00:12:32.733 09:40:56 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:32.733 09:40:56 -- setup/common.sh@32 -- # continue 00:12:32.733 09:40:56 -- setup/common.sh@31 -- # IFS=': ' 00:12:32.733 09:40:56 -- setup/common.sh@31 -- # read -r var val _ 00:12:32.733 09:40:56 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:32.733 09:40:56 -- setup/common.sh@32 -- # continue 00:12:32.733 09:40:56 -- setup/common.sh@31 -- # IFS=': ' 00:12:32.733 09:40:56 -- setup/common.sh@31 -- # read -r var val _ 00:12:32.733 09:40:56 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:32.733 09:40:56 -- setup/common.sh@32 -- # continue 00:12:32.733 09:40:56 -- setup/common.sh@31 -- # IFS=': ' 00:12:32.733 09:40:56 -- setup/common.sh@31 -- # read -r var val _ 00:12:32.733 09:40:56 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:32.733 09:40:56 -- setup/common.sh@32 -- # continue 00:12:32.733 09:40:56 -- setup/common.sh@31 -- # IFS=': ' 00:12:32.733 09:40:56 -- setup/common.sh@31 -- # read -r var val _ 00:12:32.733 09:40:56 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:32.733 09:40:56 -- setup/common.sh@32 -- # continue 00:12:32.733 09:40:56 -- 
setup/common.sh@31 -- # IFS=': ' 00:12:32.733 09:40:56 -- setup/common.sh@31 -- # read -r var val _ 00:12:32.733 09:40:56 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:32.733 09:40:56 -- setup/common.sh@32 -- # continue 00:12:32.733 09:40:56 -- setup/common.sh@31 -- # IFS=': ' 00:12:32.733 09:40:56 -- setup/common.sh@31 -- # read -r var val _ 00:12:32.733 09:40:56 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:32.733 09:40:56 -- setup/common.sh@32 -- # continue 00:12:32.733 09:40:56 -- setup/common.sh@31 -- # IFS=': ' 00:12:32.733 09:40:56 -- setup/common.sh@31 -- # read -r var val _ 00:12:32.733 09:40:56 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:32.733 09:40:56 -- setup/common.sh@32 -- # continue 00:12:32.733 09:40:56 -- setup/common.sh@31 -- # IFS=': ' 00:12:32.733 09:40:56 -- setup/common.sh@31 -- # read -r var val _ 00:12:32.733 09:40:56 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:32.733 09:40:56 -- setup/common.sh@32 -- # continue 00:12:32.733 09:40:56 -- setup/common.sh@31 -- # IFS=': ' 00:12:32.733 09:40:56 -- setup/common.sh@31 -- # read -r var val _ 00:12:32.733 09:40:56 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:32.733 09:40:56 -- setup/common.sh@32 -- # continue 00:12:32.733 09:40:56 -- setup/common.sh@31 -- # IFS=': ' 00:12:32.733 09:40:56 -- setup/common.sh@31 -- # read -r var val _ 00:12:32.733 09:40:56 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:32.733 09:40:56 -- setup/common.sh@32 -- # continue 00:12:32.733 09:40:56 -- setup/common.sh@31 -- # IFS=': ' 00:12:32.733 09:40:56 -- setup/common.sh@31 -- # read -r var val _ 00:12:32.733 09:40:56 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:32.733 09:40:56 -- setup/common.sh@32 -- # continue 00:12:32.733 09:40:56 -- setup/common.sh@31 -- # IFS=': ' 00:12:32.733 09:40:56 -- setup/common.sh@31 -- # read -r var val _ 00:12:32.733 09:40:56 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:32.733 09:40:56 -- setup/common.sh@32 -- # continue 00:12:32.733 09:40:56 -- setup/common.sh@31 -- # IFS=': ' 00:12:32.733 09:40:56 -- setup/common.sh@31 -- # read -r var val _ 00:12:32.733 09:40:56 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:32.733 09:40:56 -- setup/common.sh@32 -- # continue 00:12:32.733 09:40:56 -- setup/common.sh@31 -- # IFS=': ' 00:12:32.733 09:40:56 -- setup/common.sh@31 -- # read -r var val _ 00:12:32.733 09:40:56 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:32.733 09:40:56 -- setup/common.sh@32 -- # continue 00:12:32.733 09:40:56 -- setup/common.sh@31 -- # IFS=': ' 00:12:32.733 09:40:56 -- setup/common.sh@31 -- # read -r var val _ 00:12:32.733 09:40:56 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:32.733 09:40:56 -- setup/common.sh@32 -- # continue 00:12:32.733 09:40:56 -- setup/common.sh@31 -- # IFS=': ' 00:12:32.733 09:40:56 -- setup/common.sh@31 -- # read -r var val _ 00:12:32.733 09:40:56 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:32.733 09:40:56 -- setup/common.sh@32 -- # continue 00:12:32.733 09:40:56 -- setup/common.sh@31 -- # IFS=': ' 00:12:32.733 09:40:56 -- setup/common.sh@31 -- # read -r var val _ 00:12:32.733 09:40:56 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 
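A note on the backslash-riddled keys in this trace (\H\u\g\e\P\a\g\e\s\_\R\s\v\d and friends): when the right-hand side of == inside [[ ]] is a quoted expansion, bash performs a literal string comparison rather than a glob match, and xtrace prints that quoted word with every character escaped so the logged command stays literal if replayed. The comparison itself is an ordinary equality test; a small self-contained illustration (not taken from setup/common.sh):

#!/usr/bin/env bash
set -x
get=HugePages_Rsvd
for var in MemTotal MemFree HugePages_Rsvd; do
    # xtrace renders the next line as e.g.: [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
    [[ $var == "$get" ]] && echo "matched $var" || continue
done

Only the final iteration matches, which is why the trace is dominated by continue lines.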
00:12:32.733 09:40:56 -- setup/common.sh@32 -- # continue 00:12:32.733 09:40:56 -- setup/common.sh@31 -- # IFS=': ' 00:12:32.733 09:40:56 -- setup/common.sh@31 -- # read -r var val _ 00:12:32.733 09:40:56 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:32.733 09:40:56 -- setup/common.sh@32 -- # continue 00:12:32.733 09:40:56 -- setup/common.sh@31 -- # IFS=': ' 00:12:32.733 09:40:56 -- setup/common.sh@31 -- # read -r var val _ 00:12:32.733 09:40:56 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:32.733 09:40:56 -- setup/common.sh@32 -- # continue 00:12:32.733 09:40:56 -- setup/common.sh@31 -- # IFS=': ' 00:12:32.733 09:40:56 -- setup/common.sh@31 -- # read -r var val _ 00:12:32.733 09:40:56 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:32.733 09:40:56 -- setup/common.sh@32 -- # continue 00:12:32.733 09:40:56 -- setup/common.sh@31 -- # IFS=': ' 00:12:32.733 09:40:56 -- setup/common.sh@31 -- # read -r var val _ 00:12:32.733 09:40:56 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:32.733 09:40:56 -- setup/common.sh@32 -- # continue 00:12:32.733 09:40:56 -- setup/common.sh@31 -- # IFS=': ' 00:12:32.733 09:40:56 -- setup/common.sh@31 -- # read -r var val _ 00:12:32.733 09:40:56 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:32.733 09:40:56 -- setup/common.sh@32 -- # continue 00:12:32.733 09:40:56 -- setup/common.sh@31 -- # IFS=': ' 00:12:32.733 09:40:56 -- setup/common.sh@31 -- # read -r var val _ 00:12:32.733 09:40:56 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:32.733 09:40:56 -- setup/common.sh@32 -- # continue 00:12:32.733 09:40:56 -- setup/common.sh@31 -- # IFS=': ' 00:12:32.733 09:40:56 -- setup/common.sh@31 -- # read -r var val _ 00:12:32.733 09:40:56 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:32.733 09:40:56 -- setup/common.sh@32 -- # continue 00:12:32.733 09:40:56 -- setup/common.sh@31 -- # IFS=': ' 00:12:32.733 09:40:56 -- setup/common.sh@31 -- # read -r var val _ 00:12:32.733 09:40:56 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:32.733 09:40:56 -- setup/common.sh@32 -- # continue 00:12:32.733 09:40:56 -- setup/common.sh@31 -- # IFS=': ' 00:12:32.733 09:40:56 -- setup/common.sh@31 -- # read -r var val _ 00:12:32.733 09:40:56 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:32.733 09:40:56 -- setup/common.sh@32 -- # continue 00:12:32.733 09:40:56 -- setup/common.sh@31 -- # IFS=': ' 00:12:32.733 09:40:56 -- setup/common.sh@31 -- # read -r var val _ 00:12:32.733 09:40:56 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:32.733 09:40:56 -- setup/common.sh@32 -- # continue 00:12:32.733 09:40:56 -- setup/common.sh@31 -- # IFS=': ' 00:12:32.733 09:40:56 -- setup/common.sh@31 -- # read -r var val _ 00:12:32.733 09:40:56 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:32.733 09:40:56 -- setup/common.sh@32 -- # continue 00:12:32.733 09:40:56 -- setup/common.sh@31 -- # IFS=': ' 00:12:32.733 09:40:56 -- setup/common.sh@31 -- # read -r var val _ 00:12:32.733 09:40:56 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:32.733 09:40:56 -- setup/common.sh@32 -- # continue 00:12:32.733 09:40:56 -- setup/common.sh@31 -- # IFS=': ' 00:12:32.733 09:40:56 -- setup/common.sh@31 -- # 
read -r var val _ 00:12:32.733 09:40:56 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:32.733 09:40:56 -- setup/common.sh@32 -- # continue 00:12:32.733 09:40:56 -- setup/common.sh@31 -- # IFS=': ' 00:12:32.733 09:40:56 -- setup/common.sh@31 -- # read -r var val _ 00:12:32.733 09:40:56 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:32.733 09:40:56 -- setup/common.sh@32 -- # continue 00:12:32.733 09:40:56 -- setup/common.sh@31 -- # IFS=': ' 00:12:32.733 09:40:56 -- setup/common.sh@31 -- # read -r var val _ 00:12:32.733 09:40:56 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:32.733 09:40:56 -- setup/common.sh@32 -- # continue 00:12:32.733 09:40:56 -- setup/common.sh@31 -- # IFS=': ' 00:12:32.733 09:40:56 -- setup/common.sh@31 -- # read -r var val _ 00:12:32.733 09:40:56 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:32.733 09:40:56 -- setup/common.sh@32 -- # continue 00:12:32.733 09:40:56 -- setup/common.sh@31 -- # IFS=': ' 00:12:32.733 09:40:56 -- setup/common.sh@31 -- # read -r var val _ 00:12:32.733 09:40:56 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:32.733 09:40:56 -- setup/common.sh@32 -- # continue 00:12:32.733 09:40:56 -- setup/common.sh@31 -- # IFS=': ' 00:12:32.733 09:40:56 -- setup/common.sh@31 -- # read -r var val _ 00:12:32.733 09:40:56 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:32.733 09:40:56 -- setup/common.sh@32 -- # continue 00:12:32.733 09:40:56 -- setup/common.sh@31 -- # IFS=': ' 00:12:32.733 09:40:56 -- setup/common.sh@31 -- # read -r var val _ 00:12:32.733 09:40:56 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:32.733 09:40:56 -- setup/common.sh@32 -- # continue 00:12:32.733 09:40:56 -- setup/common.sh@31 -- # IFS=': ' 00:12:32.733 09:40:56 -- setup/common.sh@31 -- # read -r var val _ 00:12:32.733 09:40:56 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:32.733 09:40:56 -- setup/common.sh@32 -- # continue 00:12:32.733 09:40:56 -- setup/common.sh@31 -- # IFS=': ' 00:12:32.733 09:40:56 -- setup/common.sh@31 -- # read -r var val _ 00:12:32.733 09:40:56 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:32.733 09:40:56 -- setup/common.sh@33 -- # echo 0 00:12:32.733 09:40:56 -- setup/common.sh@33 -- # return 0 00:12:32.733 nr_hugepages=1024 00:12:32.733 resv_hugepages=0 00:12:32.733 surplus_hugepages=0 00:12:32.733 anon_hugepages=0 00:12:32.733 09:40:56 -- setup/hugepages.sh@100 -- # resv=0 00:12:32.733 09:40:56 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:12:32.733 09:40:56 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:12:32.733 09:40:56 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:12:32.734 09:40:56 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:12:32.734 09:40:56 -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:12:32.734 09:40:56 -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:12:32.734 09:40:56 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:12:32.734 09:40:56 -- setup/common.sh@17 -- # local get=HugePages_Total 00:12:32.734 09:40:56 -- setup/common.sh@18 -- # local node= 00:12:32.734 09:40:56 -- setup/common.sh@19 -- # local var val 00:12:32.734 09:40:56 -- setup/common.sh@20 -- # local mem_f mem 00:12:32.734 09:40:56 -- setup/common.sh@22 -- # 
mem_f=/proc/meminfo 00:12:32.734 09:40:56 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:12:32.734 09:40:56 -- setup/common.sh@25 -- # [[ -n '' ]] 00:12:32.734 09:40:56 -- setup/common.sh@28 -- # mapfile -t mem 00:12:32.734 09:40:56 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:12:32.734 09:40:56 -- setup/common.sh@31 -- # IFS=': ' 00:12:32.734 09:40:56 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 7542836 kB' 'MemAvailable: 9505104 kB' 'Buffers: 2436 kB' 'Cached: 2174868 kB' 'SwapCached: 0 kB' 'Active: 845720 kB' 'Inactive: 1449608 kB' 'Active(anon): 128488 kB' 'Inactive(anon): 0 kB' 'Active(file): 717232 kB' 'Inactive(file): 1449608 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 1584 kB' 'Writeback: 0 kB' 'AnonPages: 119696 kB' 'Mapped: 48084 kB' 'Shmem: 10464 kB' 'KReclaimable: 64736 kB' 'Slab: 137756 kB' 'SReclaimable: 64736 kB' 'SUnreclaim: 73020 kB' 'KernelStack: 6316 kB' 'PageTables: 4308 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461012 kB' 'Committed_AS: 339776 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54596 kB' 'VmallocChunk: 0 kB' 'Percpu: 6240 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 173932 kB' 'DirectMap2M: 6117376 kB' 'DirectMap1G: 8388608 kB' 00:12:32.734 09:40:56 -- setup/common.sh@31 -- # read -r var val _ 00:12:32.734 09:40:56 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:32.734 09:40:56 -- setup/common.sh@32 -- # continue 00:12:32.734 09:40:56 -- setup/common.sh@31 -- # IFS=': ' 00:12:32.734 09:40:56 -- setup/common.sh@31 -- # read -r var val _ 00:12:32.734 09:40:56 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:32.734 09:40:56 -- setup/common.sh@32 -- # continue 00:12:32.734 09:40:56 -- setup/common.sh@31 -- # IFS=': ' 00:12:32.734 09:40:56 -- setup/common.sh@31 -- # read -r var val _ 00:12:32.734 09:40:56 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:32.734 09:40:56 -- setup/common.sh@32 -- # continue 00:12:32.734 09:40:56 -- setup/common.sh@31 -- # IFS=': ' 00:12:32.734 09:40:56 -- setup/common.sh@31 -- # read -r var val _ 00:12:32.734 09:40:56 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:32.734 09:40:56 -- setup/common.sh@32 -- # continue 00:12:32.734 09:40:56 -- setup/common.sh@31 -- # IFS=': ' 00:12:32.734 09:40:56 -- setup/common.sh@31 -- # read -r var val _ 00:12:32.734 09:40:56 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:32.734 09:40:56 -- setup/common.sh@32 -- # continue 00:12:32.734 09:40:56 -- setup/common.sh@31 -- # IFS=': ' 00:12:32.734 09:40:56 -- setup/common.sh@31 -- # read -r var val _ 00:12:32.734 09:40:56 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:32.734 09:40:56 -- setup/common.sh@32 -- # continue 00:12:32.734 09:40:56 -- setup/common.sh@31 -- # IFS=': ' 00:12:32.734 09:40:56 -- setup/common.sh@31 -- # read -r var val _ 00:12:32.734 09:40:56 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 
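Once the lookups above come back (anon=0, surp=0, resv=0, nr_hugepages=1024), hugepages.sh only has to confirm a simple identity before moving on to the per-node breakdown. A condensed restatement of that arithmetic using the values visible in this run (variable names mirror the trace; the wrapper function is hypothetical):

check_hugepage_accounting() {
    local nr_hugepages=1024   # requested pool size echoed above
    local surp=0              # HugePages_Surp from get_meminfo
    local resv=0              # HugePages_Rsvd from get_meminfo
    local total=1024          # HugePages_Total, read next in the trace
    # the checks around hugepages.sh@107 and @110: the kernel's total must equal requested + surplus + reserved
    (( total == nr_hugepages + surp + resv )) || return 1
    (( total == nr_hugepages )) || return 1   # no surplus or reserved pages expected in this run
    return 0
}
check_hugepage_accounting && echo 'hugepage accounting consistent'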
00:12:32.734 09:40:56 -- setup/common.sh@32 -- # continue 00:12:32.734 09:40:56 -- setup/common.sh@31 -- # IFS=': ' 00:12:32.734 09:40:56 -- setup/common.sh@31 -- # read -r var val _ 00:12:32.734 09:40:56 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:32.734 09:40:56 -- setup/common.sh@32 -- # continue 00:12:32.734 09:40:56 -- setup/common.sh@31 -- # IFS=': ' 00:12:32.734 09:40:56 -- setup/common.sh@31 -- # read -r var val _ 00:12:32.734 09:40:56 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:32.734 09:40:56 -- setup/common.sh@32 -- # continue 00:12:32.734 09:40:56 -- setup/common.sh@31 -- # IFS=': ' 00:12:32.734 09:40:56 -- setup/common.sh@31 -- # read -r var val _ 00:12:32.734 09:40:56 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:32.734 09:40:56 -- setup/common.sh@32 -- # continue 00:12:32.734 09:40:56 -- setup/common.sh@31 -- # IFS=': ' 00:12:32.734 09:40:56 -- setup/common.sh@31 -- # read -r var val _ 00:12:32.734 09:40:56 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:32.734 09:40:56 -- setup/common.sh@32 -- # continue 00:12:32.734 09:40:56 -- setup/common.sh@31 -- # IFS=': ' 00:12:32.734 09:40:56 -- setup/common.sh@31 -- # read -r var val _ 00:12:32.734 09:40:56 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:32.734 09:40:56 -- setup/common.sh@32 -- # continue 00:12:32.734 09:40:56 -- setup/common.sh@31 -- # IFS=': ' 00:12:32.734 09:40:56 -- setup/common.sh@31 -- # read -r var val _ 00:12:32.734 09:40:56 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:32.734 09:40:56 -- setup/common.sh@32 -- # continue 00:12:32.734 09:40:56 -- setup/common.sh@31 -- # IFS=': ' 00:12:32.734 09:40:56 -- setup/common.sh@31 -- # read -r var val _ 00:12:32.734 09:40:56 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:32.734 09:40:56 -- setup/common.sh@32 -- # continue 00:12:32.734 09:40:56 -- setup/common.sh@31 -- # IFS=': ' 00:12:32.734 09:40:56 -- setup/common.sh@31 -- # read -r var val _ 00:12:32.734 09:40:56 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:32.734 09:40:56 -- setup/common.sh@32 -- # continue 00:12:32.734 09:40:56 -- setup/common.sh@31 -- # IFS=': ' 00:12:32.734 09:40:56 -- setup/common.sh@31 -- # read -r var val _ 00:12:32.734 09:40:56 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:32.734 09:40:56 -- setup/common.sh@32 -- # continue 00:12:32.734 09:40:56 -- setup/common.sh@31 -- # IFS=': ' 00:12:32.734 09:40:56 -- setup/common.sh@31 -- # read -r var val _ 00:12:32.734 09:40:56 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:32.734 09:40:56 -- setup/common.sh@32 -- # continue 00:12:32.734 09:40:56 -- setup/common.sh@31 -- # IFS=': ' 00:12:32.734 09:40:56 -- setup/common.sh@31 -- # read -r var val _ 00:12:32.734 09:40:56 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:32.734 09:40:56 -- setup/common.sh@32 -- # continue 00:12:32.734 09:40:56 -- setup/common.sh@31 -- # IFS=': ' 00:12:32.734 09:40:56 -- setup/common.sh@31 -- # read -r var val _ 00:12:32.734 09:40:56 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:32.734 09:40:56 -- setup/common.sh@32 -- # continue 00:12:32.734 09:40:56 -- setup/common.sh@31 -- # IFS=': ' 00:12:32.734 09:40:56 -- setup/common.sh@31 -- # 
read -r var val _ 00:12:32.734 09:40:56 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:32.734 09:40:56 -- setup/common.sh@32 -- # continue 00:12:32.734 09:40:56 -- setup/common.sh@31 -- # IFS=': ' 00:12:32.734 09:40:56 -- setup/common.sh@31 -- # read -r var val _ 00:12:32.734 09:40:56 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:32.734 09:40:56 -- setup/common.sh@32 -- # continue 00:12:32.734 09:40:56 -- setup/common.sh@31 -- # IFS=': ' 00:12:32.734 09:40:56 -- setup/common.sh@31 -- # read -r var val _ 00:12:32.734 09:40:56 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:32.734 09:40:56 -- setup/common.sh@32 -- # continue 00:12:32.734 09:40:56 -- setup/common.sh@31 -- # IFS=': ' 00:12:32.734 09:40:56 -- setup/common.sh@31 -- # read -r var val _ 00:12:32.734 09:40:56 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:32.734 09:40:56 -- setup/common.sh@32 -- # continue 00:12:32.734 09:40:56 -- setup/common.sh@31 -- # IFS=': ' 00:12:32.734 09:40:56 -- setup/common.sh@31 -- # read -r var val _ 00:12:32.734 09:40:56 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:32.734 09:40:56 -- setup/common.sh@32 -- # continue 00:12:32.734 09:40:56 -- setup/common.sh@31 -- # IFS=': ' 00:12:32.734 09:40:56 -- setup/common.sh@31 -- # read -r var val _ 00:12:32.734 09:40:56 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:32.735 09:40:56 -- setup/common.sh@32 -- # continue 00:12:32.735 09:40:56 -- setup/common.sh@31 -- # IFS=': ' 00:12:32.735 09:40:56 -- setup/common.sh@31 -- # read -r var val _ 00:12:32.735 09:40:56 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:32.735 09:40:56 -- setup/common.sh@32 -- # continue 00:12:32.735 09:40:56 -- setup/common.sh@31 -- # IFS=': ' 00:12:32.735 09:40:56 -- setup/common.sh@31 -- # read -r var val _ 00:12:32.735 09:40:56 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:32.735 09:40:56 -- setup/common.sh@32 -- # continue 00:12:32.735 09:40:56 -- setup/common.sh@31 -- # IFS=': ' 00:12:32.735 09:40:56 -- setup/common.sh@31 -- # read -r var val _ 00:12:32.735 09:40:56 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:32.735 09:40:56 -- setup/common.sh@32 -- # continue 00:12:32.735 09:40:56 -- setup/common.sh@31 -- # IFS=': ' 00:12:32.735 09:40:56 -- setup/common.sh@31 -- # read -r var val _ 00:12:32.735 09:40:56 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:32.735 09:40:56 -- setup/common.sh@32 -- # continue 00:12:32.735 09:40:56 -- setup/common.sh@31 -- # IFS=': ' 00:12:32.735 09:40:56 -- setup/common.sh@31 -- # read -r var val _ 00:12:32.735 09:40:56 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:32.735 09:40:56 -- setup/common.sh@32 -- # continue 00:12:32.735 09:40:56 -- setup/common.sh@31 -- # IFS=': ' 00:12:32.735 09:40:56 -- setup/common.sh@31 -- # read -r var val _ 00:12:32.735 09:40:56 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:32.735 09:40:56 -- setup/common.sh@32 -- # continue 00:12:32.735 09:40:56 -- setup/common.sh@31 -- # IFS=': ' 00:12:32.735 09:40:56 -- setup/common.sh@31 -- # read -r var val _ 00:12:32.735 09:40:56 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:32.735 09:40:56 -- setup/common.sh@32 -- # 
continue 00:12:32.735 09:40:56 -- setup/common.sh@31 -- # IFS=': ' 00:12:32.735 09:40:56 -- setup/common.sh@31 -- # read -r var val _ 00:12:32.735 09:40:56 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:32.735 09:40:56 -- setup/common.sh@32 -- # continue 00:12:32.735 09:40:56 -- setup/common.sh@31 -- # IFS=': ' 00:12:32.735 09:40:56 -- setup/common.sh@31 -- # read -r var val _ 00:12:32.735 09:40:56 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:32.735 09:40:56 -- setup/common.sh@32 -- # continue 00:12:32.735 09:40:56 -- setup/common.sh@31 -- # IFS=': ' 00:12:32.735 09:40:56 -- setup/common.sh@31 -- # read -r var val _ 00:12:32.735 09:40:56 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:32.735 09:40:56 -- setup/common.sh@32 -- # continue 00:12:32.735 09:40:56 -- setup/common.sh@31 -- # IFS=': ' 00:12:32.735 09:40:56 -- setup/common.sh@31 -- # read -r var val _ 00:12:32.735 09:40:56 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:32.735 09:40:56 -- setup/common.sh@32 -- # continue 00:12:32.735 09:40:56 -- setup/common.sh@31 -- # IFS=': ' 00:12:32.735 09:40:56 -- setup/common.sh@31 -- # read -r var val _ 00:12:32.735 09:40:56 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:32.735 09:40:56 -- setup/common.sh@32 -- # continue 00:12:32.735 09:40:56 -- setup/common.sh@31 -- # IFS=': ' 00:12:32.735 09:40:56 -- setup/common.sh@31 -- # read -r var val _ 00:12:32.735 09:40:56 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:32.735 09:40:56 -- setup/common.sh@32 -- # continue 00:12:32.735 09:40:56 -- setup/common.sh@31 -- # IFS=': ' 00:12:32.735 09:40:56 -- setup/common.sh@31 -- # read -r var val _ 00:12:32.735 09:40:56 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:32.735 09:40:56 -- setup/common.sh@32 -- # continue 00:12:32.735 09:40:56 -- setup/common.sh@31 -- # IFS=': ' 00:12:32.735 09:40:56 -- setup/common.sh@31 -- # read -r var val _ 00:12:32.735 09:40:56 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:32.735 09:40:56 -- setup/common.sh@32 -- # continue 00:12:32.735 09:40:56 -- setup/common.sh@31 -- # IFS=': ' 00:12:32.735 09:40:56 -- setup/common.sh@31 -- # read -r var val _ 00:12:32.735 09:40:56 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:32.735 09:40:56 -- setup/common.sh@32 -- # continue 00:12:32.735 09:40:56 -- setup/common.sh@31 -- # IFS=': ' 00:12:32.735 09:40:56 -- setup/common.sh@31 -- # read -r var val _ 00:12:32.735 09:40:56 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:32.735 09:40:56 -- setup/common.sh@32 -- # continue 00:12:32.735 09:40:56 -- setup/common.sh@31 -- # IFS=': ' 00:12:32.735 09:40:56 -- setup/common.sh@31 -- # read -r var val _ 00:12:32.735 09:40:56 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:32.735 09:40:56 -- setup/common.sh@32 -- # continue 00:12:32.735 09:40:56 -- setup/common.sh@31 -- # IFS=': ' 00:12:32.735 09:40:56 -- setup/common.sh@31 -- # read -r var val _ 00:12:32.735 09:40:56 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:32.735 09:40:56 -- setup/common.sh@32 -- # continue 00:12:32.735 09:40:56 -- setup/common.sh@31 -- # IFS=': ' 00:12:32.735 09:40:56 -- setup/common.sh@31 -- # read -r var val 
_ 00:12:32.735 09:40:56 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:32.735 09:40:56 -- setup/common.sh@32 -- # continue 00:12:32.735 09:40:56 -- setup/common.sh@31 -- # IFS=': ' 00:12:32.735 09:40:56 -- setup/common.sh@31 -- # read -r var val _ 00:12:32.735 09:40:56 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:32.735 09:40:56 -- setup/common.sh@32 -- # continue 00:12:32.735 09:40:56 -- setup/common.sh@31 -- # IFS=': ' 00:12:32.735 09:40:56 -- setup/common.sh@31 -- # read -r var val _ 00:12:32.735 09:40:56 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:32.735 09:40:56 -- setup/common.sh@32 -- # continue 00:12:32.735 09:40:56 -- setup/common.sh@31 -- # IFS=': ' 00:12:32.735 09:40:56 -- setup/common.sh@31 -- # read -r var val _ 00:12:32.735 09:40:56 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:32.735 09:40:56 -- setup/common.sh@32 -- # continue 00:12:32.735 09:40:56 -- setup/common.sh@31 -- # IFS=': ' 00:12:32.735 09:40:56 -- setup/common.sh@31 -- # read -r var val _ 00:12:32.735 09:40:56 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:32.735 09:40:56 -- setup/common.sh@33 -- # echo 1024 00:12:32.735 09:40:56 -- setup/common.sh@33 -- # return 0 00:12:32.735 09:40:56 -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:12:32.735 09:40:56 -- setup/hugepages.sh@112 -- # get_nodes 00:12:32.735 09:40:56 -- setup/hugepages.sh@27 -- # local node 00:12:32.735 09:40:56 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:12:32.735 09:40:56 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:12:32.735 09:40:56 -- setup/hugepages.sh@32 -- # no_nodes=1 00:12:32.735 09:40:56 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:12:32.735 09:40:56 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:12:32.735 09:40:56 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:12:33.002 09:40:56 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:12:33.002 09:40:56 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:12:33.002 09:40:56 -- setup/common.sh@18 -- # local node=0 00:12:33.002 09:40:56 -- setup/common.sh@19 -- # local var val 00:12:33.002 09:40:56 -- setup/common.sh@20 -- # local mem_f mem 00:12:33.002 09:40:56 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:12:33.002 09:40:56 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:12:33.002 09:40:56 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:12:33.002 09:40:56 -- setup/common.sh@28 -- # mapfile -t mem 00:12:33.002 09:40:56 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:12:33.002 09:40:56 -- setup/common.sh@31 -- # IFS=': ' 00:12:33.002 09:40:56 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 7542836 kB' 'MemUsed: 4699136 kB' 'SwapCached: 0 kB' 'Active: 845712 kB' 'Inactive: 1449608 kB' 'Active(anon): 128480 kB' 'Inactive(anon): 0 kB' 'Active(file): 717232 kB' 'Inactive(file): 1449608 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 1584 kB' 'Writeback: 0 kB' 'FilePages: 2177304 kB' 'Mapped: 48084 kB' 'AnonPages: 119636 kB' 'Shmem: 10464 kB' 'KernelStack: 6300 kB' 'PageTables: 4256 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 64736 kB' 'Slab: 137756 kB' 'SReclaimable: 64736 kB' 'SUnreclaim: 73020 kB' 
'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:12:33.002 09:40:56 -- setup/common.sh@31 -- # read -r var val _ 00:12:33.002 09:40:56 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:33.002 09:40:56 -- setup/common.sh@32 -- # continue 00:12:33.002 09:40:56 -- setup/common.sh@31 -- # IFS=': ' 00:12:33.002 09:40:56 -- setup/common.sh@31 -- # read -r var val _ 00:12:33.002 09:40:56 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:33.002 09:40:56 -- setup/common.sh@32 -- # continue 00:12:33.002 09:40:56 -- setup/common.sh@31 -- # IFS=': ' 00:12:33.002 09:40:56 -- setup/common.sh@31 -- # read -r var val _ 00:12:33.002 09:40:56 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:33.002 09:40:56 -- setup/common.sh@32 -- # continue 00:12:33.002 09:40:56 -- setup/common.sh@31 -- # IFS=': ' 00:12:33.002 09:40:56 -- setup/common.sh@31 -- # read -r var val _ 00:12:33.002 09:40:56 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:33.002 09:40:56 -- setup/common.sh@32 -- # continue 00:12:33.002 09:40:56 -- setup/common.sh@31 -- # IFS=': ' 00:12:33.002 09:40:56 -- setup/common.sh@31 -- # read -r var val _ 00:12:33.002 09:40:56 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:33.002 09:40:56 -- setup/common.sh@32 -- # continue 00:12:33.002 09:40:56 -- setup/common.sh@31 -- # IFS=': ' 00:12:33.002 09:40:56 -- setup/common.sh@31 -- # read -r var val _ 00:12:33.002 09:40:56 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:33.002 09:40:56 -- setup/common.sh@32 -- # continue 00:12:33.002 09:40:56 -- setup/common.sh@31 -- # IFS=': ' 00:12:33.002 09:40:56 -- setup/common.sh@31 -- # read -r var val _ 00:12:33.002 09:40:56 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:33.002 09:40:56 -- setup/common.sh@32 -- # continue 00:12:33.002 09:40:56 -- setup/common.sh@31 -- # IFS=': ' 00:12:33.002 09:40:56 -- setup/common.sh@31 -- # read -r var val _ 00:12:33.002 09:40:56 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:33.002 09:40:56 -- setup/common.sh@32 -- # continue 00:12:33.002 09:40:56 -- setup/common.sh@31 -- # IFS=': ' 00:12:33.002 09:40:56 -- setup/common.sh@31 -- # read -r var val _ 00:12:33.002 09:40:56 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:33.002 09:40:56 -- setup/common.sh@32 -- # continue 00:12:33.002 09:40:56 -- setup/common.sh@31 -- # IFS=': ' 00:12:33.002 09:40:56 -- setup/common.sh@31 -- # read -r var val _ 00:12:33.002 09:40:56 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:33.002 09:40:56 -- setup/common.sh@32 -- # continue 00:12:33.002 09:40:56 -- setup/common.sh@31 -- # IFS=': ' 00:12:33.002 09:40:56 -- setup/common.sh@31 -- # read -r var val _ 00:12:33.002 09:40:56 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:33.002 09:40:56 -- setup/common.sh@32 -- # continue 00:12:33.002 09:40:56 -- setup/common.sh@31 -- # IFS=': ' 00:12:33.002 09:40:56 -- setup/common.sh@31 -- # read -r var val _ 00:12:33.002 09:40:56 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:33.002 09:40:56 -- setup/common.sh@32 -- # continue 00:12:33.002 09:40:56 -- setup/common.sh@31 -- 
# IFS=': ' 00:12:33.002 09:40:56 -- setup/common.sh@31 -- # read -r var val _ 00:12:33.002 09:40:56 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:33.002 09:40:56 -- setup/common.sh@32 -- # continue 00:12:33.002 09:40:56 -- setup/common.sh@31 -- # IFS=': ' 00:12:33.002 09:40:56 -- setup/common.sh@31 -- # read -r var val _ 00:12:33.002 09:40:56 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:33.002 09:40:56 -- setup/common.sh@32 -- # continue 00:12:33.002 09:40:56 -- setup/common.sh@31 -- # IFS=': ' 00:12:33.002 09:40:56 -- setup/common.sh@31 -- # read -r var val _ 00:12:33.002 09:40:56 -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:33.002 09:40:56 -- setup/common.sh@32 -- # continue 00:12:33.002 09:40:56 -- setup/common.sh@31 -- # IFS=': ' 00:12:33.003 09:40:56 -- setup/common.sh@31 -- # read -r var val _ 00:12:33.003 09:40:56 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:33.003 09:40:56 -- setup/common.sh@32 -- # continue 00:12:33.003 09:40:56 -- setup/common.sh@31 -- # IFS=': ' 00:12:33.003 09:40:56 -- setup/common.sh@31 -- # read -r var val _ 00:12:33.003 09:40:56 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:33.003 09:40:56 -- setup/common.sh@32 -- # continue 00:12:33.003 09:40:56 -- setup/common.sh@31 -- # IFS=': ' 00:12:33.003 09:40:56 -- setup/common.sh@31 -- # read -r var val _ 00:12:33.003 09:40:56 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:33.003 09:40:56 -- setup/common.sh@32 -- # continue 00:12:33.003 09:40:56 -- setup/common.sh@31 -- # IFS=': ' 00:12:33.003 09:40:56 -- setup/common.sh@31 -- # read -r var val _ 00:12:33.003 09:40:56 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:33.003 09:40:56 -- setup/common.sh@32 -- # continue 00:12:33.003 09:40:56 -- setup/common.sh@31 -- # IFS=': ' 00:12:33.003 09:40:56 -- setup/common.sh@31 -- # read -r var val _ 00:12:33.003 09:40:56 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:33.003 09:40:56 -- setup/common.sh@32 -- # continue 00:12:33.003 09:40:56 -- setup/common.sh@31 -- # IFS=': ' 00:12:33.003 09:40:56 -- setup/common.sh@31 -- # read -r var val _ 00:12:33.003 09:40:56 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:33.003 09:40:56 -- setup/common.sh@32 -- # continue 00:12:33.003 09:40:56 -- setup/common.sh@31 -- # IFS=': ' 00:12:33.003 09:40:56 -- setup/common.sh@31 -- # read -r var val _ 00:12:33.003 09:40:56 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:33.003 09:40:56 -- setup/common.sh@32 -- # continue 00:12:33.003 09:40:56 -- setup/common.sh@31 -- # IFS=': ' 00:12:33.003 09:40:56 -- setup/common.sh@31 -- # read -r var val _ 00:12:33.003 09:40:56 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:33.003 09:40:56 -- setup/common.sh@32 -- # continue 00:12:33.003 09:40:56 -- setup/common.sh@31 -- # IFS=': ' 00:12:33.003 09:40:56 -- setup/common.sh@31 -- # read -r var val _ 00:12:33.003 09:40:56 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:33.003 09:40:56 -- setup/common.sh@32 -- # continue 00:12:33.003 09:40:56 -- setup/common.sh@31 -- # IFS=': ' 00:12:33.003 09:40:56 -- setup/common.sh@31 -- # read -r var val _ 00:12:33.003 09:40:56 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:33.003 
09:40:56 -- setup/common.sh@32 -- # continue 00:12:33.003 09:40:56 -- setup/common.sh@31 -- # IFS=': ' 00:12:33.003 09:40:56 -- setup/common.sh@31 -- # read -r var val _ 00:12:33.003 09:40:56 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:33.003 09:40:56 -- setup/common.sh@32 -- # continue 00:12:33.003 09:40:56 -- setup/common.sh@31 -- # IFS=': ' 00:12:33.003 09:40:56 -- setup/common.sh@31 -- # read -r var val _ 00:12:33.003 09:40:56 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:33.003 09:40:56 -- setup/common.sh@32 -- # continue 00:12:33.003 09:40:56 -- setup/common.sh@31 -- # IFS=': ' 00:12:33.003 09:40:56 -- setup/common.sh@31 -- # read -r var val _ 00:12:33.003 09:40:56 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:33.003 09:40:56 -- setup/common.sh@32 -- # continue 00:12:33.003 09:40:56 -- setup/common.sh@31 -- # IFS=': ' 00:12:33.003 09:40:56 -- setup/common.sh@31 -- # read -r var val _ 00:12:33.003 09:40:56 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:33.003 09:40:56 -- setup/common.sh@32 -- # continue 00:12:33.003 09:40:56 -- setup/common.sh@31 -- # IFS=': ' 00:12:33.003 09:40:56 -- setup/common.sh@31 -- # read -r var val _ 00:12:33.003 09:40:56 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:33.003 09:40:56 -- setup/common.sh@32 -- # continue 00:12:33.003 09:40:56 -- setup/common.sh@31 -- # IFS=': ' 00:12:33.003 09:40:56 -- setup/common.sh@31 -- # read -r var val _ 00:12:33.003 09:40:56 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:33.003 09:40:56 -- setup/common.sh@32 -- # continue 00:12:33.003 09:40:56 -- setup/common.sh@31 -- # IFS=': ' 00:12:33.003 09:40:56 -- setup/common.sh@31 -- # read -r var val _ 00:12:33.003 09:40:56 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:33.003 09:40:56 -- setup/common.sh@32 -- # continue 00:12:33.003 09:40:56 -- setup/common.sh@31 -- # IFS=': ' 00:12:33.003 09:40:56 -- setup/common.sh@31 -- # read -r var val _ 00:12:33.003 09:40:56 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:33.003 09:40:56 -- setup/common.sh@32 -- # continue 00:12:33.003 09:40:56 -- setup/common.sh@31 -- # IFS=': ' 00:12:33.003 09:40:56 -- setup/common.sh@31 -- # read -r var val _ 00:12:33.003 09:40:56 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:33.003 09:40:56 -- setup/common.sh@32 -- # continue 00:12:33.003 09:40:56 -- setup/common.sh@31 -- # IFS=': ' 00:12:33.003 09:40:56 -- setup/common.sh@31 -- # read -r var val _ 00:12:33.003 09:40:56 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:33.003 09:40:56 -- setup/common.sh@32 -- # continue 00:12:33.003 09:40:56 -- setup/common.sh@31 -- # IFS=': ' 00:12:33.003 09:40:56 -- setup/common.sh@31 -- # read -r var val _ 00:12:33.003 09:40:56 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:33.003 09:40:56 -- setup/common.sh@32 -- # continue 00:12:33.003 09:40:56 -- setup/common.sh@31 -- # IFS=': ' 00:12:33.003 09:40:56 -- setup/common.sh@31 -- # read -r var val _ 00:12:33.003 09:40:56 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:33.003 09:40:56 -- setup/common.sh@33 -- # echo 0 00:12:33.003 09:40:56 -- setup/common.sh@33 -- # return 0 00:12:33.003 09:40:56 -- setup/hugepages.sh@117 -- # (( 
nodes_test[node] += 0 )) 00:12:33.003 09:40:56 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:12:33.003 node0=1024 expecting 1024 00:12:33.003 09:40:56 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:12:33.003 09:40:56 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:12:33.003 09:40:56 -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:12:33.003 09:40:56 -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:12:33.003 09:40:56 -- setup/hugepages.sh@202 -- # CLEAR_HUGE=no 00:12:33.003 09:40:56 -- setup/hugepages.sh@202 -- # NRHUGE=512 00:12:33.003 09:40:56 -- setup/hugepages.sh@202 -- # setup output 00:12:33.003 09:40:56 -- setup/common.sh@9 -- # [[ output == output ]] 00:12:33.003 09:40:56 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:12:33.274 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:12:33.274 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:12:33.274 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:12:33.274 INFO: Requested 512 hugepages but 1024 already allocated on node0 00:12:33.274 09:40:56 -- setup/hugepages.sh@204 -- # verify_nr_hugepages 00:12:33.274 09:40:56 -- setup/hugepages.sh@89 -- # local node 00:12:33.274 09:40:56 -- setup/hugepages.sh@90 -- # local sorted_t 00:12:33.274 09:40:56 -- setup/hugepages.sh@91 -- # local sorted_s 00:12:33.274 09:40:56 -- setup/hugepages.sh@92 -- # local surp 00:12:33.274 09:40:56 -- setup/hugepages.sh@93 -- # local resv 00:12:33.274 09:40:56 -- setup/hugepages.sh@94 -- # local anon 00:12:33.274 09:40:56 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:12:33.274 09:40:56 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:12:33.274 09:40:56 -- setup/common.sh@17 -- # local get=AnonHugePages 00:12:33.274 09:40:56 -- setup/common.sh@18 -- # local node= 00:12:33.274 09:40:56 -- setup/common.sh@19 -- # local var val 00:12:33.274 09:40:56 -- setup/common.sh@20 -- # local mem_f mem 00:12:33.274 09:40:56 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:12:33.274 09:40:56 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:12:33.274 09:40:56 -- setup/common.sh@25 -- # [[ -n '' ]] 00:12:33.274 09:40:56 -- setup/common.sh@28 -- # mapfile -t mem 00:12:33.274 09:40:56 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:12:33.274 09:40:56 -- setup/common.sh@31 -- # IFS=': ' 00:12:33.274 09:40:56 -- setup/common.sh@31 -- # read -r var val _ 00:12:33.274 09:40:56 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 7537984 kB' 'MemAvailable: 9500248 kB' 'Buffers: 2436 kB' 'Cached: 2174864 kB' 'SwapCached: 0 kB' 'Active: 846640 kB' 'Inactive: 1449604 kB' 'Active(anon): 129408 kB' 'Inactive(anon): 0 kB' 'Active(file): 717232 kB' 'Inactive(file): 1449604 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 272 kB' 'Writeback: 0 kB' 'AnonPages: 120620 kB' 'Mapped: 48700 kB' 'Shmem: 10464 kB' 'KReclaimable: 64736 kB' 'Slab: 137756 kB' 'SReclaimable: 64736 kB' 'SUnreclaim: 73020 kB' 'KernelStack: 6356 kB' 'PageTables: 4292 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461012 kB' 'Committed_AS: 339776 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54660 kB' 'VmallocChunk: 0 kB' 'Percpu: 6240 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 
0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 173932 kB' 'DirectMap2M: 6117376 kB' 'DirectMap1G: 8388608 kB' 00:12:33.274 09:40:56 -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:33.274 09:40:56 -- setup/common.sh@32 -- # continue 00:12:33.274 09:40:56 -- setup/common.sh@31 -- # IFS=': ' 00:12:33.274 09:40:56 -- setup/common.sh@31 -- # read -r var val _ 00:12:33.274 09:40:56 -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:33.274 09:40:56 -- setup/common.sh@32 -- # continue 00:12:33.274 09:40:56 -- setup/common.sh@31 -- # IFS=': ' 00:12:33.274 09:40:56 -- setup/common.sh@31 -- # read -r var val _ 00:12:33.274 09:40:56 -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:33.274 09:40:56 -- setup/common.sh@32 -- # continue 00:12:33.274 09:40:56 -- setup/common.sh@31 -- # IFS=': ' 00:12:33.274 09:40:56 -- setup/common.sh@31 -- # read -r var val _ 00:12:33.274 09:40:56 -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:33.274 09:40:56 -- setup/common.sh@32 -- # continue 00:12:33.274 09:40:56 -- setup/common.sh@31 -- # IFS=': ' 00:12:33.274 09:40:56 -- setup/common.sh@31 -- # read -r var val _ 00:12:33.274 09:40:56 -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:33.274 09:40:56 -- setup/common.sh@32 -- # continue 00:12:33.274 09:40:56 -- setup/common.sh@31 -- # IFS=': ' 00:12:33.274 09:40:56 -- setup/common.sh@31 -- # read -r var val _ 00:12:33.274 09:40:56 -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:33.274 09:40:56 -- setup/common.sh@32 -- # continue 00:12:33.274 09:40:56 -- setup/common.sh@31 -- # IFS=': ' 00:12:33.274 09:40:56 -- setup/common.sh@31 -- # read -r var val _ 00:12:33.274 09:40:56 -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:33.274 09:40:56 -- setup/common.sh@32 -- # continue 00:12:33.274 09:40:56 -- setup/common.sh@31 -- # IFS=': ' 00:12:33.274 09:40:56 -- setup/common.sh@31 -- # read -r var val _ 00:12:33.274 09:40:56 -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:33.274 09:40:56 -- setup/common.sh@32 -- # continue 00:12:33.274 09:40:56 -- setup/common.sh@31 -- # IFS=': ' 00:12:33.274 09:40:56 -- setup/common.sh@31 -- # read -r var val _ 00:12:33.274 09:40:56 -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:33.274 09:40:56 -- setup/common.sh@32 -- # continue 00:12:33.274 09:40:56 -- setup/common.sh@31 -- # IFS=': ' 00:12:33.274 09:40:56 -- setup/common.sh@31 -- # read -r var val _ 00:12:33.274 09:40:56 -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:33.275 09:40:56 -- setup/common.sh@32 -- # continue 00:12:33.275 09:40:56 -- setup/common.sh@31 -- # IFS=': ' 00:12:33.275 09:40:56 -- setup/common.sh@31 -- # read -r var val _ 00:12:33.275 09:40:56 -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:33.275 09:40:56 -- setup/common.sh@32 -- # continue 00:12:33.275 09:40:56 -- setup/common.sh@31 -- # IFS=': ' 00:12:33.275 09:40:56 -- setup/common.sh@31 -- # read -r var val _ 00:12:33.275 09:40:56 -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:33.275 09:40:56 -- 
setup/common.sh@32 -- # continue 00:12:33.275 09:40:56 -- setup/common.sh@31 -- # IFS=': ' 00:12:33.275 09:40:56 -- setup/common.sh@31 -- # read -r var val _ 00:12:33.275 09:40:56 -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:33.275 09:40:56 -- setup/common.sh@32 -- # continue 00:12:33.275 09:40:56 -- setup/common.sh@31 -- # IFS=': ' 00:12:33.275 09:40:56 -- setup/common.sh@31 -- # read -r var val _ 00:12:33.275 09:40:56 -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:33.275 09:40:56 -- setup/common.sh@32 -- # continue 00:12:33.275 09:40:56 -- setup/common.sh@31 -- # IFS=': ' 00:12:33.275 09:40:56 -- setup/common.sh@31 -- # read -r var val _ 00:12:33.275 09:40:56 -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:33.275 09:40:56 -- setup/common.sh@32 -- # continue 00:12:33.275 09:40:56 -- setup/common.sh@31 -- # IFS=': ' 00:12:33.275 09:40:56 -- setup/common.sh@31 -- # read -r var val _ 00:12:33.275 09:40:56 -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:33.275 09:40:56 -- setup/common.sh@32 -- # continue 00:12:33.275 09:40:56 -- setup/common.sh@31 -- # IFS=': ' 00:12:33.275 09:40:56 -- setup/common.sh@31 -- # read -r var val _ 00:12:33.275 09:40:56 -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:33.275 09:40:56 -- setup/common.sh@32 -- # continue 00:12:33.275 09:40:56 -- setup/common.sh@31 -- # IFS=': ' 00:12:33.275 09:40:56 -- setup/common.sh@31 -- # read -r var val _ 00:12:33.275 09:40:56 -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:33.275 09:40:56 -- setup/common.sh@32 -- # continue 00:12:33.275 09:40:56 -- setup/common.sh@31 -- # IFS=': ' 00:12:33.275 09:40:56 -- setup/common.sh@31 -- # read -r var val _ 00:12:33.275 09:40:56 -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:33.275 09:40:56 -- setup/common.sh@32 -- # continue 00:12:33.275 09:40:56 -- setup/common.sh@31 -- # IFS=': ' 00:12:33.275 09:40:56 -- setup/common.sh@31 -- # read -r var val _ 00:12:33.275 09:40:56 -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:33.275 09:40:56 -- setup/common.sh@32 -- # continue 00:12:33.275 09:40:56 -- setup/common.sh@31 -- # IFS=': ' 00:12:33.275 09:40:56 -- setup/common.sh@31 -- # read -r var val _ 00:12:33.275 09:40:56 -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:33.275 09:40:56 -- setup/common.sh@32 -- # continue 00:12:33.275 09:40:56 -- setup/common.sh@31 -- # IFS=': ' 00:12:33.275 09:40:56 -- setup/common.sh@31 -- # read -r var val _ 00:12:33.275 09:40:56 -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:33.275 09:40:56 -- setup/common.sh@32 -- # continue 00:12:33.275 09:40:56 -- setup/common.sh@31 -- # IFS=': ' 00:12:33.275 09:40:56 -- setup/common.sh@31 -- # read -r var val _ 00:12:33.275 09:40:56 -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:33.275 09:40:56 -- setup/common.sh@32 -- # continue 00:12:33.275 09:40:56 -- setup/common.sh@31 -- # IFS=': ' 00:12:33.275 09:40:56 -- setup/common.sh@31 -- # read -r var val _ 00:12:33.275 09:40:56 -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:33.275 09:40:56 -- setup/common.sh@32 -- # continue 00:12:33.275 09:40:56 -- setup/common.sh@31 -- # IFS=': ' 00:12:33.275 09:40:56 -- setup/common.sh@31 -- # read -r var val _ 00:12:33.275 09:40:56 -- setup/common.sh@32 -- # [[ Slab == 
\A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:33.275 09:40:56 -- setup/common.sh@32 -- # continue 00:12:33.275 09:40:56 -- setup/common.sh@31 -- # IFS=': ' 00:12:33.275 09:40:56 -- setup/common.sh@31 -- # read -r var val _ 00:12:33.275 09:40:56 -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:33.275 09:40:56 -- setup/common.sh@32 -- # continue 00:12:33.275 09:40:56 -- setup/common.sh@31 -- # IFS=': ' 00:12:33.275 09:40:56 -- setup/common.sh@31 -- # read -r var val _ 00:12:33.275 09:40:56 -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:33.275 09:40:56 -- setup/common.sh@32 -- # continue 00:12:33.275 09:40:56 -- setup/common.sh@31 -- # IFS=': ' 00:12:33.275 09:40:56 -- setup/common.sh@31 -- # read -r var val _ 00:12:33.275 09:40:56 -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:33.275 09:40:56 -- setup/common.sh@32 -- # continue 00:12:33.275 09:40:56 -- setup/common.sh@31 -- # IFS=': ' 00:12:33.275 09:40:56 -- setup/common.sh@31 -- # read -r var val _ 00:12:33.275 09:40:56 -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:33.275 09:40:56 -- setup/common.sh@32 -- # continue 00:12:33.275 09:40:56 -- setup/common.sh@31 -- # IFS=': ' 00:12:33.275 09:40:56 -- setup/common.sh@31 -- # read -r var val _ 00:12:33.275 09:40:56 -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:33.275 09:40:56 -- setup/common.sh@32 -- # continue 00:12:33.275 09:40:56 -- setup/common.sh@31 -- # IFS=': ' 00:12:33.275 09:40:56 -- setup/common.sh@31 -- # read -r var val _ 00:12:33.275 09:40:56 -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:33.275 09:40:56 -- setup/common.sh@32 -- # continue 00:12:33.275 09:40:56 -- setup/common.sh@31 -- # IFS=': ' 00:12:33.275 09:40:56 -- setup/common.sh@31 -- # read -r var val _ 00:12:33.275 09:40:56 -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:33.275 09:40:56 -- setup/common.sh@32 -- # continue 00:12:33.275 09:40:56 -- setup/common.sh@31 -- # IFS=': ' 00:12:33.275 09:40:56 -- setup/common.sh@31 -- # read -r var val _ 00:12:33.275 09:40:56 -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:33.275 09:40:56 -- setup/common.sh@32 -- # continue 00:12:33.275 09:40:56 -- setup/common.sh@31 -- # IFS=': ' 00:12:33.275 09:40:56 -- setup/common.sh@31 -- # read -r var val _ 00:12:33.275 09:40:56 -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:33.275 09:40:56 -- setup/common.sh@32 -- # continue 00:12:33.275 09:40:56 -- setup/common.sh@31 -- # IFS=': ' 00:12:33.275 09:40:56 -- setup/common.sh@31 -- # read -r var val _ 00:12:33.275 09:40:56 -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:33.275 09:40:56 -- setup/common.sh@32 -- # continue 00:12:33.275 09:40:56 -- setup/common.sh@31 -- # IFS=': ' 00:12:33.275 09:40:56 -- setup/common.sh@31 -- # read -r var val _ 00:12:33.275 09:40:56 -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:33.275 09:40:56 -- setup/common.sh@32 -- # continue 00:12:33.275 09:40:56 -- setup/common.sh@31 -- # IFS=': ' 00:12:33.275 09:40:56 -- setup/common.sh@31 -- # read -r var val _ 00:12:33.275 09:40:56 -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:33.275 09:40:56 -- setup/common.sh@32 -- # continue 00:12:33.275 09:40:56 -- setup/common.sh@31 -- # IFS=': ' 00:12:33.275 09:40:56 -- setup/common.sh@31 -- # 
read -r var val _ 00:12:33.275 09:40:56 -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:33.275 09:40:56 -- setup/common.sh@32 -- # continue 00:12:33.275 09:40:56 -- setup/common.sh@31 -- # IFS=': ' 00:12:33.275 09:40:56 -- setup/common.sh@31 -- # read -r var val _ 00:12:33.275 09:40:56 -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:33.275 09:40:56 -- setup/common.sh@32 -- # continue 00:12:33.275 09:40:56 -- setup/common.sh@31 -- # IFS=': ' 00:12:33.275 09:40:56 -- setup/common.sh@31 -- # read -r var val _ 00:12:33.275 09:40:56 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:33.275 09:40:56 -- setup/common.sh@32 -- # continue 00:12:33.275 09:40:56 -- setup/common.sh@31 -- # IFS=': ' 00:12:33.275 09:40:56 -- setup/common.sh@31 -- # read -r var val _ 00:12:33.275 09:40:56 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:33.275 09:40:56 -- setup/common.sh@33 -- # echo 0 00:12:33.275 09:40:56 -- setup/common.sh@33 -- # return 0 00:12:33.275 09:40:56 -- setup/hugepages.sh@97 -- # anon=0 00:12:33.275 09:40:56 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:12:33.275 09:40:56 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:12:33.275 09:40:56 -- setup/common.sh@18 -- # local node= 00:12:33.275 09:40:56 -- setup/common.sh@19 -- # local var val 00:12:33.275 09:40:56 -- setup/common.sh@20 -- # local mem_f mem 00:12:33.275 09:40:56 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:12:33.275 09:40:56 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:12:33.275 09:40:56 -- setup/common.sh@25 -- # [[ -n '' ]] 00:12:33.275 09:40:56 -- setup/common.sh@28 -- # mapfile -t mem 00:12:33.275 09:40:56 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:12:33.275 09:40:56 -- setup/common.sh@31 -- # IFS=': ' 00:12:33.275 09:40:56 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 7538000 kB' 'MemAvailable: 9500264 kB' 'Buffers: 2436 kB' 'Cached: 2174864 kB' 'SwapCached: 0 kB' 'Active: 846084 kB' 'Inactive: 1449604 kB' 'Active(anon): 128852 kB' 'Inactive(anon): 0 kB' 'Active(file): 717232 kB' 'Inactive(file): 1449604 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 272 kB' 'Writeback: 0 kB' 'AnonPages: 119972 kB' 'Mapped: 48476 kB' 'Shmem: 10464 kB' 'KReclaimable: 64736 kB' 'Slab: 137752 kB' 'SReclaimable: 64736 kB' 'SUnreclaim: 73016 kB' 'KernelStack: 6308 kB' 'PageTables: 4120 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461012 kB' 'Committed_AS: 339776 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54596 kB' 'VmallocChunk: 0 kB' 'Percpu: 6240 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 173932 kB' 'DirectMap2M: 6117376 kB' 'DirectMap1G: 8388608 kB' 00:12:33.275 09:40:56 -- setup/common.sh@31 -- # read -r var val _ 00:12:33.275 09:40:56 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:33.275 09:40:56 -- setup/common.sh@32 -- # continue 00:12:33.275 09:40:56 -- setup/common.sh@31 -- # IFS=': ' 00:12:33.275 09:40:56 -- setup/common.sh@31 -- # read -r 
var val _ 00:12:33.276 09:40:56 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:33.276 09:40:56 -- setup/common.sh@32 -- # continue 00:12:33.276 09:40:56 -- setup/common.sh@31 -- # IFS=': ' 00:12:33.276 09:40:56 -- setup/common.sh@31 -- # read -r var val _ 00:12:33.276 09:40:56 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:33.276 09:40:56 -- setup/common.sh@32 -- # continue 00:12:33.276 09:40:56 -- setup/common.sh@31 -- # IFS=': ' 00:12:33.276 09:40:56 -- setup/common.sh@31 -- # read -r var val _ 00:12:33.276 09:40:56 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:33.276 09:40:56 -- setup/common.sh@32 -- # continue 00:12:33.276 09:40:56 -- setup/common.sh@31 -- # IFS=': ' 00:12:33.276 09:40:56 -- setup/common.sh@31 -- # read -r var val _ 00:12:33.276 09:40:56 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:33.276 09:40:56 -- setup/common.sh@32 -- # continue 00:12:33.276 09:40:56 -- setup/common.sh@31 -- # IFS=': ' 00:12:33.276 09:40:56 -- setup/common.sh@31 -- # read -r var val _ 00:12:33.276 09:40:56 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:33.276 09:40:56 -- setup/common.sh@32 -- # continue 00:12:33.276 09:40:56 -- setup/common.sh@31 -- # IFS=': ' 00:12:33.276 09:40:56 -- setup/common.sh@31 -- # read -r var val _ 00:12:33.276 09:40:56 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:33.276 09:40:56 -- setup/common.sh@32 -- # continue 00:12:33.276 09:40:56 -- setup/common.sh@31 -- # IFS=': ' 00:12:33.276 09:40:56 -- setup/common.sh@31 -- # read -r var val _ 00:12:33.276 09:40:56 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:33.276 09:40:56 -- setup/common.sh@32 -- # continue 00:12:33.276 09:40:56 -- setup/common.sh@31 -- # IFS=': ' 00:12:33.276 09:40:56 -- setup/common.sh@31 -- # read -r var val _ 00:12:33.276 09:40:56 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:33.276 09:40:56 -- setup/common.sh@32 -- # continue 00:12:33.276 09:40:56 -- setup/common.sh@31 -- # IFS=': ' 00:12:33.276 09:40:56 -- setup/common.sh@31 -- # read -r var val _ 00:12:33.276 09:40:56 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:33.276 09:40:56 -- setup/common.sh@32 -- # continue 00:12:33.276 09:40:56 -- setup/common.sh@31 -- # IFS=': ' 00:12:33.276 09:40:56 -- setup/common.sh@31 -- # read -r var val _ 00:12:33.276 09:40:56 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:33.276 09:40:56 -- setup/common.sh@32 -- # continue 00:12:33.276 09:40:56 -- setup/common.sh@31 -- # IFS=': ' 00:12:33.276 09:40:56 -- setup/common.sh@31 -- # read -r var val _ 00:12:33.276 09:40:56 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:33.276 09:40:56 -- setup/common.sh@32 -- # continue 00:12:33.276 09:40:56 -- setup/common.sh@31 -- # IFS=': ' 00:12:33.276 09:40:56 -- setup/common.sh@31 -- # read -r var val _ 00:12:33.276 09:40:56 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:33.276 09:40:56 -- setup/common.sh@32 -- # continue 00:12:33.276 09:40:56 -- setup/common.sh@31 -- # IFS=': ' 00:12:33.276 09:40:56 -- setup/common.sh@31 -- # read -r var val _ 00:12:33.276 09:40:56 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:33.276 09:40:56 -- setup/common.sh@32 -- # continue 00:12:33.276 
09:40:56 -- setup/common.sh@31 -- # IFS=': ' 00:12:33.276 09:40:56 -- setup/common.sh@31 -- # read -r var val _ 00:12:33.276 09:40:56 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:33.276 09:40:56 -- setup/common.sh@32 -- # continue 00:12:33.276 09:40:56 -- setup/common.sh@31 -- # IFS=': ' 00:12:33.276 09:40:56 -- setup/common.sh@31 -- # read -r var val _ 00:12:33.276 09:40:56 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:33.276 09:40:56 -- setup/common.sh@32 -- # continue 00:12:33.276 09:40:56 -- setup/common.sh@31 -- # IFS=': ' 00:12:33.276 09:40:56 -- setup/common.sh@31 -- # read -r var val _ 00:12:33.276 09:40:56 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:33.276 09:40:56 -- setup/common.sh@32 -- # continue 00:12:33.276 09:40:56 -- setup/common.sh@31 -- # IFS=': ' 00:12:33.276 09:40:56 -- setup/common.sh@31 -- # read -r var val _ 00:12:33.276 09:40:56 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:33.276 09:40:56 -- setup/common.sh@32 -- # continue 00:12:33.276 09:40:56 -- setup/common.sh@31 -- # IFS=': ' 00:12:33.276 09:40:56 -- setup/common.sh@31 -- # read -r var val _ 00:12:33.276 09:40:56 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:33.276 09:40:56 -- setup/common.sh@32 -- # continue 00:12:33.276 09:40:56 -- setup/common.sh@31 -- # IFS=': ' 00:12:33.276 09:40:56 -- setup/common.sh@31 -- # read -r var val _ 00:12:33.276 09:40:56 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:33.276 09:40:56 -- setup/common.sh@32 -- # continue 00:12:33.276 09:40:56 -- setup/common.sh@31 -- # IFS=': ' 00:12:33.276 09:40:56 -- setup/common.sh@31 -- # read -r var val _ 00:12:33.276 09:40:56 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:33.276 09:40:56 -- setup/common.sh@32 -- # continue 00:12:33.276 09:40:56 -- setup/common.sh@31 -- # IFS=': ' 00:12:33.276 09:40:56 -- setup/common.sh@31 -- # read -r var val _ 00:12:33.276 09:40:56 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:33.276 09:40:56 -- setup/common.sh@32 -- # continue 00:12:33.276 09:40:56 -- setup/common.sh@31 -- # IFS=': ' 00:12:33.276 09:40:56 -- setup/common.sh@31 -- # read -r var val _ 00:12:33.276 09:40:56 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:33.276 09:40:56 -- setup/common.sh@32 -- # continue 00:12:33.276 09:40:56 -- setup/common.sh@31 -- # IFS=': ' 00:12:33.276 09:40:56 -- setup/common.sh@31 -- # read -r var val _ 00:12:33.276 09:40:56 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:33.276 09:40:56 -- setup/common.sh@32 -- # continue 00:12:33.276 09:40:56 -- setup/common.sh@31 -- # IFS=': ' 00:12:33.276 09:40:56 -- setup/common.sh@31 -- # read -r var val _ 00:12:33.276 09:40:56 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:33.276 09:40:56 -- setup/common.sh@32 -- # continue 00:12:33.276 09:40:56 -- setup/common.sh@31 -- # IFS=': ' 00:12:33.276 09:40:56 -- setup/common.sh@31 -- # read -r var val _ 00:12:33.276 09:40:56 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:33.276 09:40:56 -- setup/common.sh@32 -- # continue 00:12:33.276 09:40:56 -- setup/common.sh@31 -- # IFS=': ' 00:12:33.276 09:40:56 -- setup/common.sh@31 -- # read -r var val _ 00:12:33.276 09:40:56 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 
00:12:33.276 09:40:56 -- setup/common.sh@32 -- # continue 00:12:33.276 09:40:56 -- setup/common.sh@31 -- # IFS=': ' 00:12:33.276 09:40:56 -- setup/common.sh@31 -- # read -r var val _ 00:12:33.276 09:40:56 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:33.276 09:40:56 -- setup/common.sh@32 -- # continue 00:12:33.276 09:40:56 -- setup/common.sh@31 -- # IFS=': ' 00:12:33.276 09:40:56 -- setup/common.sh@31 -- # read -r var val _ 00:12:33.276 09:40:56 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:33.276 09:40:56 -- setup/common.sh@32 -- # continue 00:12:33.276 09:40:56 -- setup/common.sh@31 -- # IFS=': ' 00:12:33.276 09:40:56 -- setup/common.sh@31 -- # read -r var val _ 00:12:33.276 09:40:56 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:33.276 09:40:56 -- setup/common.sh@32 -- # continue 00:12:33.276 09:40:56 -- setup/common.sh@31 -- # IFS=': ' 00:12:33.276 09:40:56 -- setup/common.sh@31 -- # read -r var val _ 00:12:33.276 09:40:56 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:33.276 09:40:56 -- setup/common.sh@32 -- # continue 00:12:33.276 09:40:56 -- setup/common.sh@31 -- # IFS=': ' 00:12:33.276 09:40:56 -- setup/common.sh@31 -- # read -r var val _ 00:12:33.276 09:40:56 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:33.276 09:40:56 -- setup/common.sh@32 -- # continue 00:12:33.276 09:40:56 -- setup/common.sh@31 -- # IFS=': ' 00:12:33.276 09:40:56 -- setup/common.sh@31 -- # read -r var val _ 00:12:33.276 09:40:56 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:33.276 09:40:56 -- setup/common.sh@32 -- # continue 00:12:33.276 09:40:56 -- setup/common.sh@31 -- # IFS=': ' 00:12:33.276 09:40:56 -- setup/common.sh@31 -- # read -r var val _ 00:12:33.276 09:40:56 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:33.276 09:40:56 -- setup/common.sh@32 -- # continue 00:12:33.276 09:40:56 -- setup/common.sh@31 -- # IFS=': ' 00:12:33.276 09:40:56 -- setup/common.sh@31 -- # read -r var val _ 00:12:33.276 09:40:56 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:33.276 09:40:56 -- setup/common.sh@32 -- # continue 00:12:33.276 09:40:56 -- setup/common.sh@31 -- # IFS=': ' 00:12:33.276 09:40:56 -- setup/common.sh@31 -- # read -r var val _ 00:12:33.276 09:40:56 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:33.276 09:40:56 -- setup/common.sh@32 -- # continue 00:12:33.276 09:40:56 -- setup/common.sh@31 -- # IFS=': ' 00:12:33.276 09:40:56 -- setup/common.sh@31 -- # read -r var val _ 00:12:33.276 09:40:56 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:33.276 09:40:56 -- setup/common.sh@32 -- # continue 00:12:33.276 09:40:56 -- setup/common.sh@31 -- # IFS=': ' 00:12:33.276 09:40:56 -- setup/common.sh@31 -- # read -r var val _ 00:12:33.276 09:40:56 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:33.276 09:40:56 -- setup/common.sh@32 -- # continue 00:12:33.276 09:40:56 -- setup/common.sh@31 -- # IFS=': ' 00:12:33.276 09:40:56 -- setup/common.sh@31 -- # read -r var val _ 00:12:33.276 09:40:56 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:33.276 09:40:56 -- setup/common.sh@32 -- # continue 00:12:33.276 09:40:56 -- setup/common.sh@31 -- # IFS=': ' 00:12:33.276 09:40:56 -- setup/common.sh@31 -- # read -r 
var val _ 00:12:33.276 09:40:56 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:33.276 09:40:56 -- setup/common.sh@32 -- # continue 00:12:33.276 09:40:56 -- setup/common.sh@31 -- # IFS=': ' 00:12:33.276 09:40:56 -- setup/common.sh@31 -- # read -r var val _ 00:12:33.276 09:40:56 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:33.276 09:40:56 -- setup/common.sh@32 -- # continue 00:12:33.276 09:40:56 -- setup/common.sh@31 -- # IFS=': ' 00:12:33.276 09:40:56 -- setup/common.sh@31 -- # read -r var val _ 00:12:33.276 09:40:56 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:33.276 09:40:56 -- setup/common.sh@32 -- # continue 00:12:33.276 09:40:56 -- setup/common.sh@31 -- # IFS=': ' 00:12:33.276 09:40:56 -- setup/common.sh@31 -- # read -r var val _ 00:12:33.277 09:40:56 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:33.277 09:40:56 -- setup/common.sh@32 -- # continue 00:12:33.277 09:40:56 -- setup/common.sh@31 -- # IFS=': ' 00:12:33.277 09:40:56 -- setup/common.sh@31 -- # read -r var val _ 00:12:33.277 09:40:56 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:33.277 09:40:56 -- setup/common.sh@32 -- # continue 00:12:33.277 09:40:56 -- setup/common.sh@31 -- # IFS=': ' 00:12:33.277 09:40:56 -- setup/common.sh@31 -- # read -r var val _ 00:12:33.277 09:40:56 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:33.277 09:40:56 -- setup/common.sh@32 -- # continue 00:12:33.277 09:40:56 -- setup/common.sh@31 -- # IFS=': ' 00:12:33.277 09:40:56 -- setup/common.sh@31 -- # read -r var val _ 00:12:33.277 09:40:56 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:33.277 09:40:56 -- setup/common.sh@32 -- # continue 00:12:33.277 09:40:56 -- setup/common.sh@31 -- # IFS=': ' 00:12:33.277 09:40:56 -- setup/common.sh@31 -- # read -r var val _ 00:12:33.277 09:40:56 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:33.277 09:40:56 -- setup/common.sh@32 -- # continue 00:12:33.277 09:40:56 -- setup/common.sh@31 -- # IFS=': ' 00:12:33.277 09:40:56 -- setup/common.sh@31 -- # read -r var val _ 00:12:33.277 09:40:56 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:33.277 09:40:56 -- setup/common.sh@32 -- # continue 00:12:33.277 09:40:56 -- setup/common.sh@31 -- # IFS=': ' 00:12:33.277 09:40:56 -- setup/common.sh@31 -- # read -r var val _ 00:12:33.277 09:40:56 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:33.277 09:40:56 -- setup/common.sh@32 -- # continue 00:12:33.277 09:40:56 -- setup/common.sh@31 -- # IFS=': ' 00:12:33.277 09:40:56 -- setup/common.sh@31 -- # read -r var val _ 00:12:33.277 09:40:56 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:33.277 09:40:56 -- setup/common.sh@32 -- # continue 00:12:33.277 09:40:56 -- setup/common.sh@31 -- # IFS=': ' 00:12:33.277 09:40:56 -- setup/common.sh@31 -- # read -r var val _ 00:12:33.277 09:40:56 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:33.277 09:40:56 -- setup/common.sh@32 -- # continue 00:12:33.277 09:40:56 -- setup/common.sh@31 -- # IFS=': ' 00:12:33.277 09:40:56 -- setup/common.sh@31 -- # read -r var val _ 00:12:33.277 09:40:56 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:33.277 09:40:56 -- 
setup/common.sh@33 -- # echo 0 00:12:33.277 09:40:56 -- setup/common.sh@33 -- # return 0 00:12:33.277 09:40:56 -- setup/hugepages.sh@99 -- # surp=0 00:12:33.277 09:40:56 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:12:33.277 09:40:56 -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:12:33.277 09:40:56 -- setup/common.sh@18 -- # local node= 00:12:33.277 09:40:56 -- setup/common.sh@19 -- # local var val 00:12:33.277 09:40:56 -- setup/common.sh@20 -- # local mem_f mem 00:12:33.277 09:40:56 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:12:33.277 09:40:56 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:12:33.277 09:40:56 -- setup/common.sh@25 -- # [[ -n '' ]] 00:12:33.277 09:40:56 -- setup/common.sh@28 -- # mapfile -t mem 00:12:33.277 09:40:56 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:12:33.277 09:40:56 -- setup/common.sh@31 -- # IFS=': ' 00:12:33.277 09:40:56 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 7538216 kB' 'MemAvailable: 9500484 kB' 'Buffers: 2436 kB' 'Cached: 2174868 kB' 'SwapCached: 0 kB' 'Active: 845416 kB' 'Inactive: 1449608 kB' 'Active(anon): 128184 kB' 'Inactive(anon): 0 kB' 'Active(file): 717232 kB' 'Inactive(file): 1449608 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 228 kB' 'Writeback: 0 kB' 'AnonPages: 119356 kB' 'Mapped: 48024 kB' 'Shmem: 10464 kB' 'KReclaimable: 64736 kB' 'Slab: 137744 kB' 'SReclaimable: 64736 kB' 'SUnreclaim: 73008 kB' 'KernelStack: 6256 kB' 'PageTables: 3908 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461012 kB' 'Committed_AS: 339776 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54580 kB' 'VmallocChunk: 0 kB' 'Percpu: 6240 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 173932 kB' 'DirectMap2M: 6117376 kB' 'DirectMap1G: 8388608 kB' 00:12:33.277 09:40:56 -- setup/common.sh@31 -- # read -r var val _ 00:12:33.277 09:40:56 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:33.277 09:40:56 -- setup/common.sh@32 -- # continue 00:12:33.277 09:40:56 -- setup/common.sh@31 -- # IFS=': ' 00:12:33.277 09:40:56 -- setup/common.sh@31 -- # read -r var val _ 00:12:33.277 09:40:56 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:33.277 09:40:56 -- setup/common.sh@32 -- # continue 00:12:33.277 09:40:56 -- setup/common.sh@31 -- # IFS=': ' 00:12:33.277 09:40:56 -- setup/common.sh@31 -- # read -r var val _ 00:12:33.277 09:40:56 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:33.277 09:40:56 -- setup/common.sh@32 -- # continue 00:12:33.277 09:40:56 -- setup/common.sh@31 -- # IFS=': ' 00:12:33.277 09:40:56 -- setup/common.sh@31 -- # read -r var val _ 00:12:33.277 09:40:56 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:33.277 09:40:56 -- setup/common.sh@32 -- # continue 00:12:33.277 09:40:56 -- setup/common.sh@31 -- # IFS=': ' 00:12:33.277 09:40:56 -- setup/common.sh@31 -- # read -r var val _ 00:12:33.277 09:40:56 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:33.277 09:40:56 -- 
setup/common.sh@32 -- # continue 00:12:33.277 09:40:56 -- setup/common.sh@31 -- # IFS=': ' 00:12:33.277 09:40:56 -- setup/common.sh@31 -- # read -r var val _ 00:12:33.277 09:40:56 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:33.277 09:40:56 -- setup/common.sh@32 -- # continue 00:12:33.277 09:40:56 -- setup/common.sh@31 -- # IFS=': ' 00:12:33.277 09:40:56 -- setup/common.sh@31 -- # read -r var val _ 00:12:33.277 09:40:56 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:33.277 09:40:56 -- setup/common.sh@32 -- # continue 00:12:33.277 09:40:56 -- setup/common.sh@31 -- # IFS=': ' 00:12:33.277 09:40:56 -- setup/common.sh@31 -- # read -r var val _ 00:12:33.277 09:40:56 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:33.277 09:40:56 -- setup/common.sh@32 -- # continue 00:12:33.277 09:40:56 -- setup/common.sh@31 -- # IFS=': ' 00:12:33.277 09:40:56 -- setup/common.sh@31 -- # read -r var val _ 00:12:33.277 09:40:56 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:33.277 09:40:56 -- setup/common.sh@32 -- # continue 00:12:33.277 09:40:56 -- setup/common.sh@31 -- # IFS=': ' 00:12:33.277 09:40:56 -- setup/common.sh@31 -- # read -r var val _ 00:12:33.277 09:40:56 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:33.277 09:40:56 -- setup/common.sh@32 -- # continue 00:12:33.277 09:40:56 -- setup/common.sh@31 -- # IFS=': ' 00:12:33.277 09:40:56 -- setup/common.sh@31 -- # read -r var val _ 00:12:33.277 09:40:56 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:33.277 09:40:56 -- setup/common.sh@32 -- # continue 00:12:33.277 09:40:56 -- setup/common.sh@31 -- # IFS=': ' 00:12:33.277 09:40:56 -- setup/common.sh@31 -- # read -r var val _ 00:12:33.277 09:40:56 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:33.277 09:40:56 -- setup/common.sh@32 -- # continue 00:12:33.277 09:40:56 -- setup/common.sh@31 -- # IFS=': ' 00:12:33.277 09:40:56 -- setup/common.sh@31 -- # read -r var val _ 00:12:33.277 09:40:56 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:33.277 09:40:56 -- setup/common.sh@32 -- # continue 00:12:33.277 09:40:56 -- setup/common.sh@31 -- # IFS=': ' 00:12:33.277 09:40:56 -- setup/common.sh@31 -- # read -r var val _ 00:12:33.277 09:40:56 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:33.277 09:40:56 -- setup/common.sh@32 -- # continue 00:12:33.277 09:40:56 -- setup/common.sh@31 -- # IFS=': ' 00:12:33.277 09:40:56 -- setup/common.sh@31 -- # read -r var val _ 00:12:33.277 09:40:56 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:33.277 09:40:56 -- setup/common.sh@32 -- # continue 00:12:33.277 09:40:56 -- setup/common.sh@31 -- # IFS=': ' 00:12:33.277 09:40:56 -- setup/common.sh@31 -- # read -r var val _ 00:12:33.277 09:40:56 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:33.277 09:40:56 -- setup/common.sh@32 -- # continue 00:12:33.277 09:40:56 -- setup/common.sh@31 -- # IFS=': ' 00:12:33.277 09:40:56 -- setup/common.sh@31 -- # read -r var val _ 00:12:33.277 09:40:56 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:33.277 09:40:56 -- setup/common.sh@32 -- # continue 00:12:33.277 09:40:56 -- setup/common.sh@31 -- # IFS=': ' 00:12:33.277 09:40:56 -- setup/common.sh@31 -- # read -r var val _ 00:12:33.277 09:40:56 -- 
setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:33.277 09:40:56 -- setup/common.sh@32 -- # continue 00:12:33.277 09:40:56 -- setup/common.sh@31 -- # IFS=': ' 00:12:33.277 09:40:56 -- setup/common.sh@31 -- # read -r var val _ 00:12:33.277 09:40:56 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:33.277 09:40:56 -- setup/common.sh@32 -- # continue 00:12:33.277 09:40:56 -- setup/common.sh@31 -- # IFS=': ' 00:12:33.277 09:40:56 -- setup/common.sh@31 -- # read -r var val _ 00:12:33.277 09:40:56 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:33.277 09:40:56 -- setup/common.sh@32 -- # continue 00:12:33.277 09:40:56 -- setup/common.sh@31 -- # IFS=': ' 00:12:33.277 09:40:56 -- setup/common.sh@31 -- # read -r var val _ 00:12:33.277 09:40:56 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:33.277 09:40:56 -- setup/common.sh@32 -- # continue 00:12:33.277 09:40:56 -- setup/common.sh@31 -- # IFS=': ' 00:12:33.277 09:40:56 -- setup/common.sh@31 -- # read -r var val _ 00:12:33.277 09:40:56 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:33.277 09:40:56 -- setup/common.sh@32 -- # continue 00:12:33.277 09:40:56 -- setup/common.sh@31 -- # IFS=': ' 00:12:33.277 09:40:56 -- setup/common.sh@31 -- # read -r var val _ 00:12:33.277 09:40:56 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:33.277 09:40:56 -- setup/common.sh@32 -- # continue 00:12:33.278 09:40:56 -- setup/common.sh@31 -- # IFS=': ' 00:12:33.278 09:40:56 -- setup/common.sh@31 -- # read -r var val _ 00:12:33.278 09:40:56 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:33.278 09:40:56 -- setup/common.sh@32 -- # continue 00:12:33.278 09:40:56 -- setup/common.sh@31 -- # IFS=': ' 00:12:33.278 09:40:56 -- setup/common.sh@31 -- # read -r var val _ 00:12:33.278 09:40:56 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:33.278 09:40:56 -- setup/common.sh@32 -- # continue 00:12:33.278 09:40:56 -- setup/common.sh@31 -- # IFS=': ' 00:12:33.278 09:40:56 -- setup/common.sh@31 -- # read -r var val _ 00:12:33.278 09:40:56 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:33.278 09:40:56 -- setup/common.sh@32 -- # continue 00:12:33.278 09:40:56 -- setup/common.sh@31 -- # IFS=': ' 00:12:33.278 09:40:56 -- setup/common.sh@31 -- # read -r var val _ 00:12:33.278 09:40:56 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:33.278 09:40:56 -- setup/common.sh@32 -- # continue 00:12:33.278 09:40:56 -- setup/common.sh@31 -- # IFS=': ' 00:12:33.278 09:40:56 -- setup/common.sh@31 -- # read -r var val _ 00:12:33.278 09:40:56 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:33.278 09:40:56 -- setup/common.sh@32 -- # continue 00:12:33.278 09:40:56 -- setup/common.sh@31 -- # IFS=': ' 00:12:33.278 09:40:56 -- setup/common.sh@31 -- # read -r var val _ 00:12:33.278 09:40:56 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:33.278 09:40:56 -- setup/common.sh@32 -- # continue 00:12:33.278 09:40:56 -- setup/common.sh@31 -- # IFS=': ' 00:12:33.278 09:40:56 -- setup/common.sh@31 -- # read -r var val _ 00:12:33.278 09:40:56 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:33.278 09:40:56 -- setup/common.sh@32 -- # continue 00:12:33.278 09:40:56 -- setup/common.sh@31 -- # IFS=': ' 
00:12:33.278 09:40:56 -- setup/common.sh@31 -- # read -r var val _ 00:12:33.278 09:40:56 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:33.278 09:40:56 -- setup/common.sh@32 -- # continue 00:12:33.278 09:40:56 -- setup/common.sh@31 -- # IFS=': ' 00:12:33.278 09:40:56 -- setup/common.sh@31 -- # read -r var val _ 00:12:33.278 09:40:56 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:33.278 09:40:56 -- setup/common.sh@32 -- # continue 00:12:33.278 09:40:56 -- setup/common.sh@31 -- # IFS=': ' 00:12:33.278 09:40:56 -- setup/common.sh@31 -- # read -r var val _ 00:12:33.278 09:40:56 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:33.278 09:40:56 -- setup/common.sh@32 -- # continue 00:12:33.278 09:40:56 -- setup/common.sh@31 -- # IFS=': ' 00:12:33.278 09:40:56 -- setup/common.sh@31 -- # read -r var val _ 00:12:33.278 09:40:56 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:33.278 09:40:56 -- setup/common.sh@32 -- # continue 00:12:33.278 09:40:56 -- setup/common.sh@31 -- # IFS=': ' 00:12:33.278 09:40:56 -- setup/common.sh@31 -- # read -r var val _ 00:12:33.278 09:40:56 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:33.278 09:40:56 -- setup/common.sh@32 -- # continue 00:12:33.278 09:40:56 -- setup/common.sh@31 -- # IFS=': ' 00:12:33.278 09:40:56 -- setup/common.sh@31 -- # read -r var val _ 00:12:33.278 09:40:56 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:33.278 09:40:56 -- setup/common.sh@32 -- # continue 00:12:33.278 09:40:56 -- setup/common.sh@31 -- # IFS=': ' 00:12:33.278 09:40:56 -- setup/common.sh@31 -- # read -r var val _ 00:12:33.278 09:40:56 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:33.278 09:40:56 -- setup/common.sh@32 -- # continue 00:12:33.278 09:40:56 -- setup/common.sh@31 -- # IFS=': ' 00:12:33.278 09:40:56 -- setup/common.sh@31 -- # read -r var val _ 00:12:33.278 09:40:56 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:33.278 09:40:56 -- setup/common.sh@32 -- # continue 00:12:33.278 09:40:56 -- setup/common.sh@31 -- # IFS=': ' 00:12:33.278 09:40:56 -- setup/common.sh@31 -- # read -r var val _ 00:12:33.278 09:40:56 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:33.278 09:40:56 -- setup/common.sh@32 -- # continue 00:12:33.278 09:40:56 -- setup/common.sh@31 -- # IFS=': ' 00:12:33.278 09:40:56 -- setup/common.sh@31 -- # read -r var val _ 00:12:33.278 09:40:56 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:33.278 09:40:56 -- setup/common.sh@32 -- # continue 00:12:33.278 09:40:56 -- setup/common.sh@31 -- # IFS=': ' 00:12:33.278 09:40:56 -- setup/common.sh@31 -- # read -r var val _ 00:12:33.278 09:40:56 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:33.278 09:40:56 -- setup/common.sh@32 -- # continue 00:12:33.278 09:40:56 -- setup/common.sh@31 -- # IFS=': ' 00:12:33.278 09:40:56 -- setup/common.sh@31 -- # read -r var val _ 00:12:33.278 09:40:56 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:33.278 09:40:56 -- setup/common.sh@32 -- # continue 00:12:33.278 09:40:56 -- setup/common.sh@31 -- # IFS=': ' 00:12:33.278 09:40:56 -- setup/common.sh@31 -- # read -r var val _ 00:12:33.278 09:40:56 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == 
\H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:33.278 09:40:56 -- setup/common.sh@32 -- # continue 00:12:33.278 09:40:56 -- setup/common.sh@31 -- # IFS=': ' 00:12:33.278 09:40:56 -- setup/common.sh@31 -- # read -r var val _ 00:12:33.278 09:40:56 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:33.278 09:40:56 -- setup/common.sh@32 -- # continue 00:12:33.278 09:40:56 -- setup/common.sh@31 -- # IFS=': ' 00:12:33.278 09:40:56 -- setup/common.sh@31 -- # read -r var val _ 00:12:33.278 09:40:56 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:33.278 09:40:56 -- setup/common.sh@32 -- # continue 00:12:33.278 09:40:56 -- setup/common.sh@31 -- # IFS=': ' 00:12:33.278 09:40:56 -- setup/common.sh@31 -- # read -r var val _ 00:12:33.278 09:40:56 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:33.278 09:40:56 -- setup/common.sh@32 -- # continue 00:12:33.278 09:40:56 -- setup/common.sh@31 -- # IFS=': ' 00:12:33.278 09:40:56 -- setup/common.sh@31 -- # read -r var val _ 00:12:33.278 09:40:56 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:33.278 09:40:56 -- setup/common.sh@32 -- # continue 00:12:33.278 09:40:56 -- setup/common.sh@31 -- # IFS=': ' 00:12:33.278 09:40:56 -- setup/common.sh@31 -- # read -r var val _ 00:12:33.278 09:40:56 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:33.278 09:40:56 -- setup/common.sh@32 -- # continue 00:12:33.278 09:40:56 -- setup/common.sh@31 -- # IFS=': ' 00:12:33.278 09:40:56 -- setup/common.sh@31 -- # read -r var val _ 00:12:33.278 09:40:56 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:33.278 09:40:56 -- setup/common.sh@32 -- # continue 00:12:33.278 09:40:56 -- setup/common.sh@31 -- # IFS=': ' 00:12:33.278 09:40:56 -- setup/common.sh@31 -- # read -r var val _ 00:12:33.278 09:40:56 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:33.278 09:40:56 -- setup/common.sh@32 -- # continue 00:12:33.278 09:40:56 -- setup/common.sh@31 -- # IFS=': ' 00:12:33.278 09:40:56 -- setup/common.sh@31 -- # read -r var val _ 00:12:33.278 09:40:56 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:33.278 09:40:56 -- setup/common.sh@33 -- # echo 0 00:12:33.278 09:40:56 -- setup/common.sh@33 -- # return 0 00:12:33.278 nr_hugepages=1024 00:12:33.278 resv_hugepages=0 00:12:33.278 surplus_hugepages=0 00:12:33.278 anon_hugepages=0 00:12:33.278 09:40:56 -- setup/hugepages.sh@100 -- # resv=0 00:12:33.278 09:40:56 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:12:33.278 09:40:56 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:12:33.278 09:40:56 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:12:33.278 09:40:56 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:12:33.278 09:40:56 -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:12:33.278 09:40:56 -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:12:33.278 09:40:56 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:12:33.278 09:40:56 -- setup/common.sh@17 -- # local get=HugePages_Total 00:12:33.278 09:40:56 -- setup/common.sh@18 -- # local node= 00:12:33.278 09:40:56 -- setup/common.sh@19 -- # local var val 00:12:33.278 09:40:56 -- setup/common.sh@20 -- # local mem_f mem 00:12:33.278 09:40:56 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:12:33.278 09:40:56 -- setup/common.sh@23 -- # [[ -e 
/sys/devices/system/node/node/meminfo ]] 00:12:33.278 09:40:56 -- setup/common.sh@25 -- # [[ -n '' ]] 00:12:33.278 09:40:56 -- setup/common.sh@28 -- # mapfile -t mem 00:12:33.278 09:40:56 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:12:33.278 09:40:56 -- setup/common.sh@31 -- # IFS=': ' 00:12:33.278 09:40:56 -- setup/common.sh@31 -- # read -r var val _ 00:12:33.278 09:40:56 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 7538760 kB' 'MemAvailable: 9501028 kB' 'Buffers: 2436 kB' 'Cached: 2174868 kB' 'SwapCached: 0 kB' 'Active: 845552 kB' 'Inactive: 1449608 kB' 'Active(anon): 128320 kB' 'Inactive(anon): 0 kB' 'Active(file): 717232 kB' 'Inactive(file): 1449608 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 228 kB' 'Writeback: 0 kB' 'AnonPages: 119444 kB' 'Mapped: 48024 kB' 'Shmem: 10464 kB' 'KReclaimable: 64736 kB' 'Slab: 137744 kB' 'SReclaimable: 64736 kB' 'SUnreclaim: 73008 kB' 'KernelStack: 6240 kB' 'PageTables: 3852 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461012 kB' 'Committed_AS: 339776 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54580 kB' 'VmallocChunk: 0 kB' 'Percpu: 6240 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 173932 kB' 'DirectMap2M: 6117376 kB' 'DirectMap1G: 8388608 kB' 00:12:33.278 09:40:56 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:33.278 09:40:56 -- setup/common.sh@32 -- # continue 00:12:33.278 09:40:56 -- setup/common.sh@31 -- # IFS=': ' 00:12:33.278 09:40:56 -- setup/common.sh@31 -- # read -r var val _ 00:12:33.278 09:40:56 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:33.278 09:40:56 -- setup/common.sh@32 -- # continue 00:12:33.278 09:40:56 -- setup/common.sh@31 -- # IFS=': ' 00:12:33.278 09:40:56 -- setup/common.sh@31 -- # read -r var val _ 00:12:33.278 09:40:56 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:33.278 09:40:56 -- setup/common.sh@32 -- # continue 00:12:33.279 09:40:56 -- setup/common.sh@31 -- # IFS=': ' 00:12:33.279 09:40:56 -- setup/common.sh@31 -- # read -r var val _ 00:12:33.279 09:40:56 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:33.279 09:40:56 -- setup/common.sh@32 -- # continue 00:12:33.279 09:40:56 -- setup/common.sh@31 -- # IFS=': ' 00:12:33.279 09:40:56 -- setup/common.sh@31 -- # read -r var val _ 00:12:33.279 09:40:56 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:33.279 09:40:56 -- setup/common.sh@32 -- # continue 00:12:33.279 09:40:56 -- setup/common.sh@31 -- # IFS=': ' 00:12:33.279 09:40:56 -- setup/common.sh@31 -- # read -r var val _ 00:12:33.279 09:40:56 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:33.279 09:40:56 -- setup/common.sh@32 -- # continue 00:12:33.279 09:40:56 -- setup/common.sh@31 -- # IFS=': ' 00:12:33.279 09:40:56 -- setup/common.sh@31 -- # read -r var val _ 00:12:33.279 09:40:56 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:33.279 09:40:56 -- setup/common.sh@32 -- # continue 00:12:33.279 09:40:56 -- 
setup/common.sh@31 -- # IFS=': ' 00:12:33.279 09:40:56 -- setup/common.sh@31 -- # read -r var val _ 00:12:33.279 09:40:56 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:33.279 09:40:56 -- setup/common.sh@32 -- # continue 00:12:33.279 09:40:56 -- setup/common.sh@31 -- # IFS=': ' 00:12:33.279 09:40:56 -- setup/common.sh@31 -- # read -r var val _ 00:12:33.279 09:40:56 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:33.279 09:40:56 -- setup/common.sh@32 -- # continue 00:12:33.279 09:40:56 -- setup/common.sh@31 -- # IFS=': ' 00:12:33.279 09:40:56 -- setup/common.sh@31 -- # read -r var val _ 00:12:33.279 09:40:56 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:33.279 09:40:56 -- setup/common.sh@32 -- # continue 00:12:33.279 09:40:56 -- setup/common.sh@31 -- # IFS=': ' 00:12:33.279 09:40:56 -- setup/common.sh@31 -- # read -r var val _ 00:12:33.279 09:40:56 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:33.279 09:40:56 -- setup/common.sh@32 -- # continue 00:12:33.279 09:40:56 -- setup/common.sh@31 -- # IFS=': ' 00:12:33.279 09:40:56 -- setup/common.sh@31 -- # read -r var val _ 00:12:33.279 09:40:56 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:33.279 09:40:56 -- setup/common.sh@32 -- # continue 00:12:33.279 09:40:56 -- setup/common.sh@31 -- # IFS=': ' 00:12:33.279 09:40:56 -- setup/common.sh@31 -- # read -r var val _ 00:12:33.279 09:40:56 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:33.279 09:40:56 -- setup/common.sh@32 -- # continue 00:12:33.279 09:40:56 -- setup/common.sh@31 -- # IFS=': ' 00:12:33.279 09:40:56 -- setup/common.sh@31 -- # read -r var val _ 00:12:33.279 09:40:56 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:33.279 09:40:56 -- setup/common.sh@32 -- # continue 00:12:33.279 09:40:56 -- setup/common.sh@31 -- # IFS=': ' 00:12:33.279 09:40:56 -- setup/common.sh@31 -- # read -r var val _ 00:12:33.279 09:40:56 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:33.279 09:40:56 -- setup/common.sh@32 -- # continue 00:12:33.279 09:40:56 -- setup/common.sh@31 -- # IFS=': ' 00:12:33.279 09:40:56 -- setup/common.sh@31 -- # read -r var val _ 00:12:33.279 09:40:56 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:33.279 09:40:56 -- setup/common.sh@32 -- # continue 00:12:33.279 09:40:56 -- setup/common.sh@31 -- # IFS=': ' 00:12:33.279 09:40:56 -- setup/common.sh@31 -- # read -r var val _ 00:12:33.279 09:40:56 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:33.279 09:40:56 -- setup/common.sh@32 -- # continue 00:12:33.279 09:40:56 -- setup/common.sh@31 -- # IFS=': ' 00:12:33.279 09:40:56 -- setup/common.sh@31 -- # read -r var val _ 00:12:33.279 09:40:56 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:33.279 09:40:56 -- setup/common.sh@32 -- # continue 00:12:33.279 09:40:56 -- setup/common.sh@31 -- # IFS=': ' 00:12:33.279 09:40:56 -- setup/common.sh@31 -- # read -r var val _ 00:12:33.279 09:40:56 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:33.279 09:40:56 -- setup/common.sh@32 -- # continue 00:12:33.279 09:40:56 -- setup/common.sh@31 -- # IFS=': ' 00:12:33.279 09:40:56 -- setup/common.sh@31 -- # read -r var val _ 00:12:33.279 09:40:56 -- setup/common.sh@32 -- # [[ Writeback == 
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:33.279 09:40:56 -- setup/common.sh@32 -- # continue 00:12:33.279 09:40:56 -- setup/common.sh@31 -- # IFS=': ' 00:12:33.279 09:40:56 -- setup/common.sh@31 -- # read -r var val _ 00:12:33.279 09:40:56 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:33.279 09:40:56 -- setup/common.sh@32 -- # continue 00:12:33.279 09:40:56 -- setup/common.sh@31 -- # IFS=': ' 00:12:33.279 09:40:56 -- setup/common.sh@31 -- # read -r var val _ 00:12:33.279 09:40:56 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:33.279 09:40:56 -- setup/common.sh@32 -- # continue 00:12:33.279 09:40:56 -- setup/common.sh@31 -- # IFS=': ' 00:12:33.279 09:40:56 -- setup/common.sh@31 -- # read -r var val _ 00:12:33.279 09:40:56 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:33.279 09:40:56 -- setup/common.sh@32 -- # continue 00:12:33.279 09:40:56 -- setup/common.sh@31 -- # IFS=': ' 00:12:33.279 09:40:56 -- setup/common.sh@31 -- # read -r var val _ 00:12:33.279 09:40:56 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:33.279 09:40:56 -- setup/common.sh@32 -- # continue 00:12:33.279 09:40:56 -- setup/common.sh@31 -- # IFS=': ' 00:12:33.279 09:40:56 -- setup/common.sh@31 -- # read -r var val _ 00:12:33.279 09:40:56 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:33.279 09:40:56 -- setup/common.sh@32 -- # continue 00:12:33.279 09:40:56 -- setup/common.sh@31 -- # IFS=': ' 00:12:33.279 09:40:56 -- setup/common.sh@31 -- # read -r var val _ 00:12:33.279 09:40:56 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:33.279 09:40:56 -- setup/common.sh@32 -- # continue 00:12:33.279 09:40:56 -- setup/common.sh@31 -- # IFS=': ' 00:12:33.279 09:40:56 -- setup/common.sh@31 -- # read -r var val _ 00:12:33.279 09:40:56 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:33.279 09:40:56 -- setup/common.sh@32 -- # continue 00:12:33.279 09:40:56 -- setup/common.sh@31 -- # IFS=': ' 00:12:33.279 09:40:56 -- setup/common.sh@31 -- # read -r var val _ 00:12:33.279 09:40:56 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:33.279 09:40:56 -- setup/common.sh@32 -- # continue 00:12:33.279 09:40:56 -- setup/common.sh@31 -- # IFS=': ' 00:12:33.279 09:40:56 -- setup/common.sh@31 -- # read -r var val _ 00:12:33.279 09:40:56 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:33.279 09:40:56 -- setup/common.sh@32 -- # continue 00:12:33.279 09:40:56 -- setup/common.sh@31 -- # IFS=': ' 00:12:33.279 09:40:56 -- setup/common.sh@31 -- # read -r var val _ 00:12:33.279 09:40:56 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:33.279 09:40:56 -- setup/common.sh@32 -- # continue 00:12:33.279 09:40:56 -- setup/common.sh@31 -- # IFS=': ' 00:12:33.279 09:40:56 -- setup/common.sh@31 -- # read -r var val _ 00:12:33.279 09:40:56 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:33.279 09:40:56 -- setup/common.sh@32 -- # continue 00:12:33.279 09:40:56 -- setup/common.sh@31 -- # IFS=': ' 00:12:33.279 09:40:56 -- setup/common.sh@31 -- # read -r var val _ 00:12:33.279 09:40:56 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:33.279 09:40:56 -- setup/common.sh@32 -- # continue 00:12:33.279 09:40:56 -- setup/common.sh@31 -- # IFS=': ' 00:12:33.279 
09:40:56 -- setup/common.sh@31 -- # read -r var val _ 00:12:33.550 09:40:56 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:33.550 09:40:56 -- setup/common.sh@32 -- # continue 00:12:33.550 09:40:56 -- setup/common.sh@31 -- # IFS=': ' 00:12:33.550 09:40:56 -- setup/common.sh@31 -- # read -r var val _ 00:12:33.550 09:40:56 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:33.550 09:40:56 -- setup/common.sh@32 -- # continue 00:12:33.550 09:40:56 -- setup/common.sh@31 -- # IFS=': ' 00:12:33.550 09:40:56 -- setup/common.sh@31 -- # read -r var val _ 00:12:33.550 09:40:56 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:33.550 09:40:56 -- setup/common.sh@32 -- # continue 00:12:33.550 09:40:56 -- setup/common.sh@31 -- # IFS=': ' 00:12:33.550 09:40:56 -- setup/common.sh@31 -- # read -r var val _ 00:12:33.550 09:40:56 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:33.550 09:40:56 -- setup/common.sh@32 -- # continue 00:12:33.550 09:40:56 -- setup/common.sh@31 -- # IFS=': ' 00:12:33.550 09:40:56 -- setup/common.sh@31 -- # read -r var val _ 00:12:33.550 09:40:56 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:33.550 09:40:56 -- setup/common.sh@32 -- # continue 00:12:33.550 09:40:56 -- setup/common.sh@31 -- # IFS=': ' 00:12:33.550 09:40:56 -- setup/common.sh@31 -- # read -r var val _ 00:12:33.550 09:40:56 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:33.550 09:40:56 -- setup/common.sh@32 -- # continue 00:12:33.550 09:40:56 -- setup/common.sh@31 -- # IFS=': ' 00:12:33.550 09:40:56 -- setup/common.sh@31 -- # read -r var val _ 00:12:33.550 09:40:56 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:33.550 09:40:56 -- setup/common.sh@32 -- # continue 00:12:33.550 09:40:56 -- setup/common.sh@31 -- # IFS=': ' 00:12:33.550 09:40:56 -- setup/common.sh@31 -- # read -r var val _ 00:12:33.550 09:40:56 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:33.550 09:40:56 -- setup/common.sh@32 -- # continue 00:12:33.550 09:40:56 -- setup/common.sh@31 -- # IFS=': ' 00:12:33.550 09:40:56 -- setup/common.sh@31 -- # read -r var val _ 00:12:33.550 09:40:56 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:33.550 09:40:56 -- setup/common.sh@32 -- # continue 00:12:33.550 09:40:56 -- setup/common.sh@31 -- # IFS=': ' 00:12:33.550 09:40:56 -- setup/common.sh@31 -- # read -r var val _ 00:12:33.550 09:40:56 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:33.550 09:40:56 -- setup/common.sh@32 -- # continue 00:12:33.550 09:40:56 -- setup/common.sh@31 -- # IFS=': ' 00:12:33.550 09:40:56 -- setup/common.sh@31 -- # read -r var val _ 00:12:33.550 09:40:56 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:33.550 09:40:56 -- setup/common.sh@32 -- # continue 00:12:33.550 09:40:56 -- setup/common.sh@31 -- # IFS=': ' 00:12:33.550 09:40:56 -- setup/common.sh@31 -- # read -r var val _ 00:12:33.550 09:40:56 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:33.550 09:40:56 -- setup/common.sh@32 -- # continue 00:12:33.550 09:40:56 -- setup/common.sh@31 -- # IFS=': ' 00:12:33.550 09:40:56 -- setup/common.sh@31 -- # read -r var val _ 00:12:33.550 09:40:56 -- setup/common.sh@32 -- # [[ FilePmdMapped == 
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:33.550 09:40:56 -- setup/common.sh@32 -- # continue 00:12:33.550 09:40:56 -- setup/common.sh@31 -- # IFS=': ' 00:12:33.550 09:40:56 -- setup/common.sh@31 -- # read -r var val _ 00:12:33.550 09:40:56 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:33.550 09:40:56 -- setup/common.sh@32 -- # continue 00:12:33.550 09:40:56 -- setup/common.sh@31 -- # IFS=': ' 00:12:33.550 09:40:56 -- setup/common.sh@31 -- # read -r var val _ 00:12:33.550 09:40:56 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:33.550 09:40:56 -- setup/common.sh@32 -- # continue 00:12:33.550 09:40:56 -- setup/common.sh@31 -- # IFS=': ' 00:12:33.550 09:40:56 -- setup/common.sh@31 -- # read -r var val _ 00:12:33.550 09:40:56 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:33.550 09:40:56 -- setup/common.sh@32 -- # continue 00:12:33.550 09:40:56 -- setup/common.sh@31 -- # IFS=': ' 00:12:33.550 09:40:56 -- setup/common.sh@31 -- # read -r var val _ 00:12:33.550 09:40:56 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:33.550 09:40:56 -- setup/common.sh@33 -- # echo 1024 00:12:33.550 09:40:56 -- setup/common.sh@33 -- # return 0 00:12:33.550 09:40:56 -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:12:33.550 09:40:56 -- setup/hugepages.sh@112 -- # get_nodes 00:12:33.550 09:40:56 -- setup/hugepages.sh@27 -- # local node 00:12:33.550 09:40:56 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:12:33.550 09:40:56 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:12:33.550 09:40:56 -- setup/hugepages.sh@32 -- # no_nodes=1 00:12:33.550 09:40:56 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:12:33.550 09:40:56 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:12:33.550 09:40:56 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:12:33.550 09:40:56 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:12:33.550 09:40:56 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:12:33.550 09:40:56 -- setup/common.sh@18 -- # local node=0 00:12:33.550 09:40:56 -- setup/common.sh@19 -- # local var val 00:12:33.550 09:40:56 -- setup/common.sh@20 -- # local mem_f mem 00:12:33.550 09:40:56 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:12:33.550 09:40:56 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:12:33.550 09:40:56 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:12:33.550 09:40:56 -- setup/common.sh@28 -- # mapfile -t mem 00:12:33.550 09:40:56 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:12:33.550 09:40:56 -- setup/common.sh@31 -- # IFS=': ' 00:12:33.550 09:40:56 -- setup/common.sh@31 -- # read -r var val _ 00:12:33.550 09:40:56 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 7538760 kB' 'MemUsed: 4703212 kB' 'SwapCached: 0 kB' 'Active: 845556 kB' 'Inactive: 1449608 kB' 'Active(anon): 128324 kB' 'Inactive(anon): 0 kB' 'Active(file): 717232 kB' 'Inactive(file): 1449608 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 228 kB' 'Writeback: 0 kB' 'FilePages: 2177304 kB' 'Mapped: 48024 kB' 'AnonPages: 119444 kB' 'Shmem: 10464 kB' 'KernelStack: 6240 kB' 'PageTables: 3852 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 64736 kB' 'Slab: 137744 kB' 'SReclaimable: 64736 kB' 'SUnreclaim: 73008 kB' 'AnonHugePages: 0 
kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:12:33.550 09:40:56 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:33.550 09:40:56 -- setup/common.sh@32 -- # continue 00:12:33.550 09:40:56 -- setup/common.sh@31 -- # IFS=': ' 00:12:33.550 09:40:56 -- setup/common.sh@31 -- # read -r var val _ 00:12:33.550 09:40:56 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:33.550 09:40:56 -- setup/common.sh@32 -- # continue 00:12:33.550 09:40:56 -- setup/common.sh@31 -- # IFS=': ' 00:12:33.550 09:40:56 -- setup/common.sh@31 -- # read -r var val _ 00:12:33.550 09:40:56 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:33.550 09:40:56 -- setup/common.sh@32 -- # continue 00:12:33.550 09:40:56 -- setup/common.sh@31 -- # IFS=': ' 00:12:33.550 09:40:56 -- setup/common.sh@31 -- # read -r var val _ 00:12:33.550 09:40:56 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:33.550 09:40:56 -- setup/common.sh@32 -- # continue 00:12:33.550 09:40:56 -- setup/common.sh@31 -- # IFS=': ' 00:12:33.550 09:40:56 -- setup/common.sh@31 -- # read -r var val _ 00:12:33.550 09:40:56 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:33.550 09:40:56 -- setup/common.sh@32 -- # continue 00:12:33.550 09:40:56 -- setup/common.sh@31 -- # IFS=': ' 00:12:33.550 09:40:56 -- setup/common.sh@31 -- # read -r var val _ 00:12:33.550 09:40:56 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:33.550 09:40:56 -- setup/common.sh@32 -- # continue 00:12:33.550 09:40:56 -- setup/common.sh@31 -- # IFS=': ' 00:12:33.550 09:40:56 -- setup/common.sh@31 -- # read -r var val _ 00:12:33.550 09:40:56 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:33.550 09:40:56 -- setup/common.sh@32 -- # continue 00:12:33.550 09:40:56 -- setup/common.sh@31 -- # IFS=': ' 00:12:33.550 09:40:56 -- setup/common.sh@31 -- # read -r var val _ 00:12:33.550 09:40:56 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:33.550 09:40:56 -- setup/common.sh@32 -- # continue 00:12:33.550 09:40:56 -- setup/common.sh@31 -- # IFS=': ' 00:12:33.550 09:40:56 -- setup/common.sh@31 -- # read -r var val _ 00:12:33.550 09:40:56 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:33.550 09:40:56 -- setup/common.sh@32 -- # continue 00:12:33.550 09:40:56 -- setup/common.sh@31 -- # IFS=': ' 00:12:33.550 09:40:56 -- setup/common.sh@31 -- # read -r var val _ 00:12:33.550 09:40:56 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:33.550 09:40:56 -- setup/common.sh@32 -- # continue 00:12:33.550 09:40:56 -- setup/common.sh@31 -- # IFS=': ' 00:12:33.550 09:40:56 -- setup/common.sh@31 -- # read -r var val _ 00:12:33.550 09:40:56 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:33.550 09:40:56 -- setup/common.sh@32 -- # continue 00:12:33.550 09:40:56 -- setup/common.sh@31 -- # IFS=': ' 00:12:33.550 09:40:56 -- setup/common.sh@31 -- # read -r var val _ 00:12:33.550 09:40:56 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:33.550 09:40:56 -- setup/common.sh@32 -- # continue 00:12:33.550 09:40:56 -- setup/common.sh@31 -- # IFS=': ' 00:12:33.550 09:40:56 -- setup/common.sh@31 -- # read -r var val _ 
00:12:33.550 09:40:56 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:33.550 09:40:56 -- setup/common.sh@32 -- # continue 00:12:33.550 09:40:56 -- setup/common.sh@31 -- # IFS=': ' 00:12:33.550 09:40:56 -- setup/common.sh@31 -- # read -r var val _ 00:12:33.550 09:40:56 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:33.550 09:40:56 -- setup/common.sh@32 -- # continue 00:12:33.550 09:40:56 -- setup/common.sh@31 -- # IFS=': ' 00:12:33.550 09:40:56 -- setup/common.sh@31 -- # read -r var val _ 00:12:33.550 09:40:56 -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:33.550 09:40:56 -- setup/common.sh@32 -- # continue 00:12:33.550 09:40:56 -- setup/common.sh@31 -- # IFS=': ' 00:12:33.550 09:40:56 -- setup/common.sh@31 -- # read -r var val _ 00:12:33.550 09:40:56 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:33.550 09:40:56 -- setup/common.sh@32 -- # continue 00:12:33.550 09:40:56 -- setup/common.sh@31 -- # IFS=': ' 00:12:33.550 09:40:56 -- setup/common.sh@31 -- # read -r var val _ 00:12:33.550 09:40:56 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:33.550 09:40:56 -- setup/common.sh@32 -- # continue 00:12:33.550 09:40:56 -- setup/common.sh@31 -- # IFS=': ' 00:12:33.550 09:40:56 -- setup/common.sh@31 -- # read -r var val _ 00:12:33.550 09:40:56 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:33.550 09:40:56 -- setup/common.sh@32 -- # continue 00:12:33.550 09:40:56 -- setup/common.sh@31 -- # IFS=': ' 00:12:33.550 09:40:56 -- setup/common.sh@31 -- # read -r var val _ 00:12:33.550 09:40:56 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:33.550 09:40:56 -- setup/common.sh@32 -- # continue 00:12:33.550 09:40:56 -- setup/common.sh@31 -- # IFS=': ' 00:12:33.550 09:40:56 -- setup/common.sh@31 -- # read -r var val _ 00:12:33.550 09:40:56 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:33.550 09:40:56 -- setup/common.sh@32 -- # continue 00:12:33.550 09:40:56 -- setup/common.sh@31 -- # IFS=': ' 00:12:33.550 09:40:56 -- setup/common.sh@31 -- # read -r var val _ 00:12:33.550 09:40:56 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:33.550 09:40:56 -- setup/common.sh@32 -- # continue 00:12:33.550 09:40:56 -- setup/common.sh@31 -- # IFS=': ' 00:12:33.550 09:40:56 -- setup/common.sh@31 -- # read -r var val _ 00:12:33.550 09:40:56 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:33.551 09:40:56 -- setup/common.sh@32 -- # continue 00:12:33.551 09:40:56 -- setup/common.sh@31 -- # IFS=': ' 00:12:33.551 09:40:56 -- setup/common.sh@31 -- # read -r var val _ 00:12:33.551 09:40:56 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:33.551 09:40:56 -- setup/common.sh@32 -- # continue 00:12:33.551 09:40:56 -- setup/common.sh@31 -- # IFS=': ' 00:12:33.551 09:40:56 -- setup/common.sh@31 -- # read -r var val _ 00:12:33.551 09:40:56 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:33.551 09:40:56 -- setup/common.sh@32 -- # continue 00:12:33.551 09:40:56 -- setup/common.sh@31 -- # IFS=': ' 00:12:33.551 09:40:56 -- setup/common.sh@31 -- # read -r var val _ 00:12:33.551 09:40:56 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:33.551 09:40:56 -- setup/common.sh@32 -- # continue 00:12:33.551 09:40:56 -- 
setup/common.sh@31 -- # IFS=': ' 00:12:33.551 09:40:56 -- setup/common.sh@31 -- # read -r var val _ 00:12:33.551 09:40:56 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:33.551 09:40:56 -- setup/common.sh@32 -- # continue 00:12:33.551 09:40:56 -- setup/common.sh@31 -- # IFS=': ' 00:12:33.551 09:40:56 -- setup/common.sh@31 -- # read -r var val _ 00:12:33.551 09:40:56 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:33.551 09:40:56 -- setup/common.sh@32 -- # continue 00:12:33.551 09:40:56 -- setup/common.sh@31 -- # IFS=': ' 00:12:33.551 09:40:56 -- setup/common.sh@31 -- # read -r var val _ 00:12:33.551 09:40:56 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:33.551 09:40:56 -- setup/common.sh@32 -- # continue 00:12:33.551 09:40:56 -- setup/common.sh@31 -- # IFS=': ' 00:12:33.551 09:40:56 -- setup/common.sh@31 -- # read -r var val _ 00:12:33.551 09:40:56 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:33.551 09:40:56 -- setup/common.sh@32 -- # continue 00:12:33.551 09:40:56 -- setup/common.sh@31 -- # IFS=': ' 00:12:33.551 09:40:56 -- setup/common.sh@31 -- # read -r var val _ 00:12:33.551 09:40:56 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:33.551 09:40:56 -- setup/common.sh@32 -- # continue 00:12:33.551 09:40:56 -- setup/common.sh@31 -- # IFS=': ' 00:12:33.551 09:40:56 -- setup/common.sh@31 -- # read -r var val _ 00:12:33.551 09:40:56 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:33.551 09:40:56 -- setup/common.sh@32 -- # continue 00:12:33.551 09:40:56 -- setup/common.sh@31 -- # IFS=': ' 00:12:33.551 09:40:56 -- setup/common.sh@31 -- # read -r var val _ 00:12:33.551 09:40:56 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:33.551 09:40:56 -- setup/common.sh@32 -- # continue 00:12:33.551 09:40:56 -- setup/common.sh@31 -- # IFS=': ' 00:12:33.551 09:40:56 -- setup/common.sh@31 -- # read -r var val _ 00:12:33.551 09:40:56 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:33.551 09:40:56 -- setup/common.sh@32 -- # continue 00:12:33.551 09:40:56 -- setup/common.sh@31 -- # IFS=': ' 00:12:33.551 09:40:56 -- setup/common.sh@31 -- # read -r var val _ 00:12:33.551 09:40:56 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:33.551 09:40:56 -- setup/common.sh@32 -- # continue 00:12:33.551 09:40:56 -- setup/common.sh@31 -- # IFS=': ' 00:12:33.551 09:40:56 -- setup/common.sh@31 -- # read -r var val _ 00:12:33.551 09:40:56 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:33.551 09:40:56 -- setup/common.sh@32 -- # continue 00:12:33.551 09:40:56 -- setup/common.sh@31 -- # IFS=': ' 00:12:33.551 09:40:56 -- setup/common.sh@31 -- # read -r var val _ 00:12:33.551 09:40:56 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:33.551 09:40:56 -- setup/common.sh@32 -- # continue 00:12:33.551 09:40:56 -- setup/common.sh@31 -- # IFS=': ' 00:12:33.551 09:40:56 -- setup/common.sh@31 -- # read -r var val _ 00:12:33.551 09:40:56 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:33.551 09:40:56 -- setup/common.sh@33 -- # echo 0 00:12:33.551 09:40:56 -- setup/common.sh@33 -- # return 0 00:12:33.551 09:40:56 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:12:33.551 09:40:56 -- 
setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:12:33.551 09:40:56 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:12:33.551 09:40:56 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:12:33.551 09:40:56 -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:12:33.551 node0=1024 expecting 1024 00:12:33.551 09:40:56 -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:12:33.551 ************************************ 00:12:33.551 END TEST no_shrink_alloc 00:12:33.551 ************************************ 00:12:33.551 00:12:33.551 real 0m1.101s 00:12:33.551 user 0m0.519s 00:12:33.551 sys 0m0.605s 00:12:33.551 09:40:56 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:12:33.551 09:40:56 -- common/autotest_common.sh@10 -- # set +x 00:12:33.551 09:40:56 -- setup/hugepages.sh@217 -- # clear_hp 00:12:33.551 09:40:56 -- setup/hugepages.sh@37 -- # local node hp 00:12:33.551 09:40:56 -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:12:33.551 09:40:56 -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:12:33.551 09:40:56 -- setup/hugepages.sh@41 -- # echo 0 00:12:33.551 09:40:56 -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:12:33.551 09:40:56 -- setup/hugepages.sh@41 -- # echo 0 00:12:33.551 09:40:56 -- setup/hugepages.sh@45 -- # export CLEAR_HUGE=yes 00:12:33.551 09:40:56 -- setup/hugepages.sh@45 -- # CLEAR_HUGE=yes 00:12:33.551 ************************************ 00:12:33.551 END TEST hugepages 00:12:33.551 ************************************ 00:12:33.551 00:12:33.551 real 0m5.388s 00:12:33.551 user 0m2.416s 00:12:33.551 sys 0m2.810s 00:12:33.551 09:40:56 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:12:33.551 09:40:56 -- common/autotest_common.sh@10 -- # set +x 00:12:33.551 09:40:56 -- setup/test-setup.sh@14 -- # run_test driver /home/vagrant/spdk_repo/spdk/test/setup/driver.sh 00:12:33.551 09:40:56 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:12:33.551 09:40:56 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:12:33.551 09:40:56 -- common/autotest_common.sh@10 -- # set +x 00:12:33.551 ************************************ 00:12:33.551 START TEST driver 00:12:33.551 ************************************ 00:12:33.551 09:40:56 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/setup/driver.sh 00:12:33.823 * Looking for test storage... 
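The long xtrace run above is setup/common.sh's get_meminfo helper at work: it reads /proc/meminfo (or /sys/devices/system/node/nodeN/meminfo for a per-node query), strips any "Node N " prefix, splits each line on ': ', and keeps issuing continue until the requested key turns up, then echoes its value (1024 for HugePages_Total here, 0 for HugePages_Surp just above). A minimal stand-alone sketch of that pattern, reconstructed from the trace rather than copied from the SPDK source, under a hypothetical name:

  #!/usr/bin/env bash
  shopt -s extglob                      # needed for the "Node +([0-9]) " strip below
  meminfo_value() {                     # hypothetical helper; mirrors get_meminfo in setup/common.sh
      local get=$1 node=$2 var val _ line mem_f=/proc/meminfo
      [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]] &&
          mem_f=/sys/devices/system/node/node$node/meminfo
      mapfile -t mem < "$mem_f"
      mem=("${mem[@]#Node +([0-9]) }")  # per-node files prefix every line with "Node N "
      for line in "${mem[@]}"; do
          IFS=': ' read -r var val _ <<< "$line"
          [[ $var == "$get" ]] && { echo "$val"; return 0; }
      done
      return 1
  }
  # e.g. meminfo_value HugePages_Total   -> 1024 on this runner
  #      meminfo_value HugePages_Surp 0  -> 0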
00:12:33.823 * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup 00:12:33.823 09:40:56 -- setup/driver.sh@68 -- # setup reset 00:12:33.823 09:40:56 -- setup/common.sh@9 -- # [[ reset == output ]] 00:12:33.823 09:40:56 -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:12:34.408 09:40:57 -- setup/driver.sh@69 -- # run_test guess_driver guess_driver 00:12:34.408 09:40:57 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:12:34.408 09:40:57 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:12:34.408 09:40:57 -- common/autotest_common.sh@10 -- # set +x 00:12:34.408 ************************************ 00:12:34.408 START TEST guess_driver 00:12:34.408 ************************************ 00:12:34.408 09:40:57 -- common/autotest_common.sh@1111 -- # guess_driver 00:12:34.408 09:40:57 -- setup/driver.sh@46 -- # local driver setup_driver marker 00:12:34.408 09:40:57 -- setup/driver.sh@47 -- # local fail=0 00:12:34.408 09:40:57 -- setup/driver.sh@49 -- # pick_driver 00:12:34.408 09:40:57 -- setup/driver.sh@36 -- # vfio 00:12:34.408 09:40:57 -- setup/driver.sh@21 -- # local iommu_grups 00:12:34.408 09:40:57 -- setup/driver.sh@22 -- # local unsafe_vfio 00:12:34.408 09:40:57 -- setup/driver.sh@24 -- # [[ -e /sys/module/vfio/parameters/enable_unsafe_noiommu_mode ]] 00:12:34.408 09:40:57 -- setup/driver.sh@27 -- # iommu_groups=(/sys/kernel/iommu_groups/*) 00:12:34.408 09:40:57 -- setup/driver.sh@29 -- # (( 0 > 0 )) 00:12:34.408 09:40:57 -- setup/driver.sh@29 -- # [[ '' == Y ]] 00:12:34.408 09:40:57 -- setup/driver.sh@32 -- # return 1 00:12:34.408 09:40:57 -- setup/driver.sh@38 -- # uio 00:12:34.408 09:40:57 -- setup/driver.sh@17 -- # is_driver uio_pci_generic 00:12:34.408 09:40:57 -- setup/driver.sh@14 -- # mod uio_pci_generic 00:12:34.408 09:40:57 -- setup/driver.sh@12 -- # dep uio_pci_generic 00:12:34.408 09:40:57 -- setup/driver.sh@11 -- # modprobe --show-depends uio_pci_generic 00:12:34.408 09:40:57 -- setup/driver.sh@12 -- # [[ insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/uio/uio.ko.xz 00:12:34.408 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/uio/uio_pci_generic.ko.xz == *\.\k\o* ]] 00:12:34.408 09:40:57 -- setup/driver.sh@39 -- # echo uio_pci_generic 00:12:34.408 Looking for driver=uio_pci_generic 00:12:34.408 09:40:57 -- setup/driver.sh@49 -- # driver=uio_pci_generic 00:12:34.408 09:40:57 -- setup/driver.sh@51 -- # [[ uio_pci_generic == \N\o\ \v\a\l\i\d\ \d\r\i\v\e\r\ \f\o\u\n\d ]] 00:12:34.408 09:40:57 -- setup/driver.sh@56 -- # echo 'Looking for driver=uio_pci_generic' 00:12:34.408 09:40:57 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:12:34.408 09:40:57 -- setup/driver.sh@45 -- # setup output config 00:12:34.408 09:40:57 -- setup/common.sh@9 -- # [[ output == output ]] 00:12:34.408 09:40:57 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:12:34.974 09:40:58 -- setup/driver.sh@58 -- # [[ devices: == \-\> ]] 00:12:34.974 09:40:58 -- setup/driver.sh@58 -- # continue 00:12:34.974 09:40:58 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:12:34.974 09:40:58 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:12:34.974 09:40:58 -- setup/driver.sh@61 -- # [[ uio_pci_generic == uio_pci_generic ]] 00:12:34.974 09:40:58 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:12:35.233 09:40:58 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:12:35.233 09:40:58 -- setup/driver.sh@61 -- # [[ uio_pci_generic == uio_pci_generic ]] 00:12:35.233 09:40:58 -- 
setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:12:35.233 09:40:58 -- setup/driver.sh@64 -- # (( fail == 0 )) 00:12:35.233 09:40:58 -- setup/driver.sh@65 -- # setup reset 00:12:35.233 09:40:58 -- setup/common.sh@9 -- # [[ reset == output ]] 00:12:35.233 09:40:58 -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:12:35.799 00:12:35.799 real 0m1.445s 00:12:35.799 user 0m0.546s 00:12:35.799 sys 0m0.909s 00:12:35.799 09:40:59 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:12:35.799 09:40:59 -- common/autotest_common.sh@10 -- # set +x 00:12:35.799 ************************************ 00:12:35.799 END TEST guess_driver 00:12:35.799 ************************************ 00:12:35.799 ************************************ 00:12:35.799 END TEST driver 00:12:35.799 ************************************ 00:12:35.799 00:12:35.799 real 0m2.215s 00:12:35.799 user 0m0.802s 00:12:35.799 sys 0m1.457s 00:12:35.799 09:40:59 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:12:35.799 09:40:59 -- common/autotest_common.sh@10 -- # set +x 00:12:35.799 09:40:59 -- setup/test-setup.sh@15 -- # run_test devices /home/vagrant/spdk_repo/spdk/test/setup/devices.sh 00:12:35.799 09:40:59 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:12:35.799 09:40:59 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:12:35.799 09:40:59 -- common/autotest_common.sh@10 -- # set +x 00:12:36.058 ************************************ 00:12:36.058 START TEST devices 00:12:36.058 ************************************ 00:12:36.058 09:40:59 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/setup/devices.sh 00:12:36.058 * Looking for test storage... 00:12:36.058 * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup 00:12:36.058 09:40:59 -- setup/devices.sh@190 -- # trap cleanup EXIT 00:12:36.058 09:40:59 -- setup/devices.sh@192 -- # setup reset 00:12:36.059 09:40:59 -- setup/common.sh@9 -- # [[ reset == output ]] 00:12:36.059 09:40:59 -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:12:37.020 09:41:00 -- setup/devices.sh@194 -- # get_zoned_devs 00:12:37.020 09:41:00 -- common/autotest_common.sh@1655 -- # zoned_devs=() 00:12:37.020 09:41:00 -- common/autotest_common.sh@1655 -- # local -gA zoned_devs 00:12:37.020 09:41:00 -- common/autotest_common.sh@1656 -- # local nvme bdf 00:12:37.020 09:41:00 -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:12:37.020 09:41:00 -- common/autotest_common.sh@1659 -- # is_block_zoned nvme0n1 00:12:37.020 09:41:00 -- common/autotest_common.sh@1648 -- # local device=nvme0n1 00:12:37.020 09:41:00 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:12:37.020 09:41:00 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:12:37.020 09:41:00 -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:12:37.020 09:41:00 -- common/autotest_common.sh@1659 -- # is_block_zoned nvme0n2 00:12:37.020 09:41:00 -- common/autotest_common.sh@1648 -- # local device=nvme0n2 00:12:37.020 09:41:00 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n2/queue/zoned ]] 00:12:37.020 09:41:00 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:12:37.020 09:41:00 -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:12:37.020 09:41:00 -- common/autotest_common.sh@1659 -- # is_block_zoned nvme0n3 00:12:37.020 09:41:00 -- common/autotest_common.sh@1648 -- # local device=nvme0n3 00:12:37.020 09:41:00 -- 
common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n3/queue/zoned ]] 00:12:37.020 09:41:00 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:12:37.020 09:41:00 -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:12:37.020 09:41:00 -- common/autotest_common.sh@1659 -- # is_block_zoned nvme1n1 00:12:37.020 09:41:00 -- common/autotest_common.sh@1648 -- # local device=nvme1n1 00:12:37.020 09:41:00 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:12:37.020 09:41:00 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:12:37.020 09:41:00 -- setup/devices.sh@196 -- # blocks=() 00:12:37.020 09:41:00 -- setup/devices.sh@196 -- # declare -a blocks 00:12:37.020 09:41:00 -- setup/devices.sh@197 -- # blocks_to_pci=() 00:12:37.020 09:41:00 -- setup/devices.sh@197 -- # declare -A blocks_to_pci 00:12:37.020 09:41:00 -- setup/devices.sh@198 -- # min_disk_size=3221225472 00:12:37.020 09:41:00 -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:12:37.020 09:41:00 -- setup/devices.sh@201 -- # ctrl=nvme0n1 00:12:37.020 09:41:00 -- setup/devices.sh@201 -- # ctrl=nvme0 00:12:37.020 09:41:00 -- setup/devices.sh@202 -- # pci=0000:00:11.0 00:12:37.020 09:41:00 -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\0\0\:\1\1\.\0* ]] 00:12:37.020 09:41:00 -- setup/devices.sh@204 -- # block_in_use nvme0n1 00:12:37.020 09:41:00 -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:12:37.020 09:41:00 -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n1 00:12:37.020 No valid GPT data, bailing 00:12:37.020 09:41:00 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:12:37.020 09:41:00 -- scripts/common.sh@391 -- # pt= 00:12:37.020 09:41:00 -- scripts/common.sh@392 -- # return 1 00:12:37.020 09:41:00 -- setup/devices.sh@204 -- # sec_size_to_bytes nvme0n1 00:12:37.020 09:41:00 -- setup/common.sh@76 -- # local dev=nvme0n1 00:12:37.020 09:41:00 -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:12:37.020 09:41:00 -- setup/common.sh@80 -- # echo 4294967296 00:12:37.020 09:41:00 -- setup/devices.sh@204 -- # (( 4294967296 >= min_disk_size )) 00:12:37.020 09:41:00 -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:12:37.020 09:41:00 -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:00:11.0 00:12:37.020 09:41:00 -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:12:37.020 09:41:00 -- setup/devices.sh@201 -- # ctrl=nvme0n2 00:12:37.020 09:41:00 -- setup/devices.sh@201 -- # ctrl=nvme0 00:12:37.020 09:41:00 -- setup/devices.sh@202 -- # pci=0000:00:11.0 00:12:37.020 09:41:00 -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\0\0\:\1\1\.\0* ]] 00:12:37.020 09:41:00 -- setup/devices.sh@204 -- # block_in_use nvme0n2 00:12:37.020 09:41:00 -- scripts/common.sh@378 -- # local block=nvme0n2 pt 00:12:37.020 09:41:00 -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n2 00:12:37.020 No valid GPT data, bailing 00:12:37.020 09:41:00 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n2 00:12:37.020 09:41:00 -- scripts/common.sh@391 -- # pt= 00:12:37.020 09:41:00 -- scripts/common.sh@392 -- # return 1 00:12:37.020 09:41:00 -- setup/devices.sh@204 -- # sec_size_to_bytes nvme0n2 00:12:37.020 09:41:00 -- setup/common.sh@76 -- # local dev=nvme0n2 00:12:37.020 09:41:00 -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n2 ]] 00:12:37.020 09:41:00 -- setup/common.sh@80 -- # echo 4294967296 00:12:37.020 09:41:00 -- 
setup/devices.sh@204 -- # (( 4294967296 >= min_disk_size )) 00:12:37.020 09:41:00 -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:12:37.020 09:41:00 -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:00:11.0 00:12:37.020 09:41:00 -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:12:37.020 09:41:00 -- setup/devices.sh@201 -- # ctrl=nvme0n3 00:12:37.020 09:41:00 -- setup/devices.sh@201 -- # ctrl=nvme0 00:12:37.020 09:41:00 -- setup/devices.sh@202 -- # pci=0000:00:11.0 00:12:37.020 09:41:00 -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\0\0\:\1\1\.\0* ]] 00:12:37.020 09:41:00 -- setup/devices.sh@204 -- # block_in_use nvme0n3 00:12:37.020 09:41:00 -- scripts/common.sh@378 -- # local block=nvme0n3 pt 00:12:37.021 09:41:00 -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n3 00:12:37.021 No valid GPT data, bailing 00:12:37.021 09:41:00 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n3 00:12:37.021 09:41:00 -- scripts/common.sh@391 -- # pt= 00:12:37.021 09:41:00 -- scripts/common.sh@392 -- # return 1 00:12:37.021 09:41:00 -- setup/devices.sh@204 -- # sec_size_to_bytes nvme0n3 00:12:37.021 09:41:00 -- setup/common.sh@76 -- # local dev=nvme0n3 00:12:37.021 09:41:00 -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n3 ]] 00:12:37.021 09:41:00 -- setup/common.sh@80 -- # echo 4294967296 00:12:37.021 09:41:00 -- setup/devices.sh@204 -- # (( 4294967296 >= min_disk_size )) 00:12:37.021 09:41:00 -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:12:37.021 09:41:00 -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:00:11.0 00:12:37.021 09:41:00 -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:12:37.021 09:41:00 -- setup/devices.sh@201 -- # ctrl=nvme1n1 00:12:37.021 09:41:00 -- setup/devices.sh@201 -- # ctrl=nvme1 00:12:37.021 09:41:00 -- setup/devices.sh@202 -- # pci=0000:00:10.0 00:12:37.021 09:41:00 -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\0\0\:\1\0\.\0* ]] 00:12:37.021 09:41:00 -- setup/devices.sh@204 -- # block_in_use nvme1n1 00:12:37.021 09:41:00 -- scripts/common.sh@378 -- # local block=nvme1n1 pt 00:12:37.021 09:41:00 -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme1n1 00:12:37.021 No valid GPT data, bailing 00:12:37.021 09:41:00 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:12:37.021 09:41:00 -- scripts/common.sh@391 -- # pt= 00:12:37.021 09:41:00 -- scripts/common.sh@392 -- # return 1 00:12:37.021 09:41:00 -- setup/devices.sh@204 -- # sec_size_to_bytes nvme1n1 00:12:37.021 09:41:00 -- setup/common.sh@76 -- # local dev=nvme1n1 00:12:37.021 09:41:00 -- setup/common.sh@78 -- # [[ -e /sys/block/nvme1n1 ]] 00:12:37.021 09:41:00 -- setup/common.sh@80 -- # echo 5368709120 00:12:37.021 09:41:00 -- setup/devices.sh@204 -- # (( 5368709120 >= min_disk_size )) 00:12:37.021 09:41:00 -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:12:37.021 09:41:00 -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:00:10.0 00:12:37.021 09:41:00 -- setup/devices.sh@209 -- # (( 4 > 0 )) 00:12:37.021 09:41:00 -- setup/devices.sh@211 -- # declare -r test_disk=nvme0n1 00:12:37.021 09:41:00 -- setup/devices.sh@213 -- # run_test nvme_mount nvme_mount 00:12:37.021 09:41:00 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:12:37.021 09:41:00 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:12:37.021 09:41:00 -- common/autotest_common.sh@10 -- # set +x 00:12:37.280 
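At this point devices.sh has whittled the visible namespaces down to usable test disks: skip anything zoned, skip anything that already carries a partition table (spdk-gpt.py and blkid found none, hence the "No valid GPT data, bailing" lines), and keep namespaces of at least min_disk_size=3221225472 bytes. A condensed sketch of that filter, assuming the usual sysfs layout; the PCI-address bookkeeping (blocks_to_pci) and the SPDK-specific spdk-gpt.py step are left out:

  shopt -s extglob nullglob
  min_disk_size=3221225472                 # 3 GiB, as in devices.sh@198
  blocks=()
  for block in /sys/block/nvme!(*c*); do   # plain namespaces, no controller nodes
      dev=${block##*/}
      [[ $(<"$block/queue/zoned") != none ]] && continue            # zoned -> skip
      [[ -n $(blkid -s PTTYPE -o value "/dev/$dev") ]] && continue  # has a partition table -> in use
      size=$(( $(<"$block/size") * 512 ))  # sysfs size is in 512-byte sectors (assumed here)
      (( size >= min_disk_size )) && blocks+=("$dev")
  done
  # On this runner that leaves nvme0n1/n2/n3 (4294967296 bytes each) and nvme1n1 (5368709120 bytes).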
************************************ 00:12:37.280 START TEST nvme_mount 00:12:37.280 ************************************ 00:12:37.280 09:41:00 -- common/autotest_common.sh@1111 -- # nvme_mount 00:12:37.280 09:41:00 -- setup/devices.sh@95 -- # nvme_disk=nvme0n1 00:12:37.280 09:41:00 -- setup/devices.sh@96 -- # nvme_disk_p=nvme0n1p1 00:12:37.280 09:41:00 -- setup/devices.sh@97 -- # nvme_mount=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:12:37.280 09:41:00 -- setup/devices.sh@98 -- # nvme_dummy_test_file=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:12:37.280 09:41:00 -- setup/devices.sh@101 -- # partition_drive nvme0n1 1 00:12:37.280 09:41:00 -- setup/common.sh@39 -- # local disk=nvme0n1 00:12:37.280 09:41:00 -- setup/common.sh@40 -- # local part_no=1 00:12:37.280 09:41:00 -- setup/common.sh@41 -- # local size=1073741824 00:12:37.280 09:41:00 -- setup/common.sh@43 -- # local part part_start=0 part_end=0 00:12:37.280 09:41:00 -- setup/common.sh@44 -- # parts=() 00:12:37.280 09:41:00 -- setup/common.sh@44 -- # local parts 00:12:37.280 09:41:00 -- setup/common.sh@46 -- # (( part = 1 )) 00:12:37.280 09:41:00 -- setup/common.sh@46 -- # (( part <= part_no )) 00:12:37.280 09:41:00 -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:12:37.280 09:41:00 -- setup/common.sh@46 -- # (( part++ )) 00:12:37.280 09:41:00 -- setup/common.sh@46 -- # (( part <= part_no )) 00:12:37.280 09:41:00 -- setup/common.sh@51 -- # (( size /= 4096 )) 00:12:37.280 09:41:00 -- setup/common.sh@56 -- # sgdisk /dev/nvme0n1 --zap-all 00:12:37.280 09:41:00 -- setup/common.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/sync_dev_uevents.sh block/partition nvme0n1p1 00:12:38.236 Creating new GPT entries in memory. 00:12:38.236 GPT data structures destroyed! You may now partition the disk using fdisk or 00:12:38.236 other utilities. 00:12:38.236 09:41:01 -- setup/common.sh@57 -- # (( part = 1 )) 00:12:38.236 09:41:01 -- setup/common.sh@57 -- # (( part <= part_no )) 00:12:38.236 09:41:01 -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:12:38.236 09:41:01 -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:12:38.236 09:41:01 -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=1:2048:264191 00:12:39.170 Creating new GPT entries in memory. 00:12:39.171 The operation has completed successfully. 
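The sector numbers in that sgdisk call come straight out of the arithmetic traced above: common.sh takes the 1073741824-byte partition size, divides it by 4096, and uses the result directly as a sector count, laying partitions out back to back from sector 2048. With this run's numbers:

  size=1073741824                          # requested bytes per partition (common.sh@41)
  (( size /= 4096 ))                       # -> 262144
  part_start=2048                          # first partition always starts at sector 2048
  (( part_end = part_start + size - 1 ))   # 2048 + 262144 - 1 = 264191
  # hence: sgdisk /dev/nvme0n1 --new=1:2048:264191
  # a second partition (used by the dm_mount test below) starts at part_end + 1:
  # 264192 .. 264192 + 262144 - 1 = 526335, i.e. --new=2:264192:526335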
00:12:39.171 09:41:02 -- setup/common.sh@57 -- # (( part++ )) 00:12:39.171 09:41:02 -- setup/common.sh@57 -- # (( part <= part_no )) 00:12:39.171 09:41:02 -- setup/common.sh@62 -- # wait 56626 00:12:39.482 09:41:02 -- setup/devices.sh@102 -- # mkfs /dev/nvme0n1p1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:12:39.482 09:41:02 -- setup/common.sh@66 -- # local dev=/dev/nvme0n1p1 mount=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount size= 00:12:39.482 09:41:02 -- setup/common.sh@68 -- # mkdir -p /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:12:39.482 09:41:02 -- setup/common.sh@70 -- # [[ -e /dev/nvme0n1p1 ]] 00:12:39.482 09:41:02 -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme0n1p1 00:12:39.482 09:41:02 -- setup/common.sh@72 -- # mount /dev/nvme0n1p1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:12:39.482 09:41:02 -- setup/devices.sh@105 -- # verify 0000:00:11.0 nvme0n1:nvme0n1p1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:12:39.482 09:41:02 -- setup/devices.sh@48 -- # local dev=0000:00:11.0 00:12:39.482 09:41:02 -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme0n1p1 00:12:39.482 09:41:02 -- setup/devices.sh@50 -- # local mount_point=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:12:39.482 09:41:02 -- setup/devices.sh@51 -- # local test_file=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:12:39.482 09:41:02 -- setup/devices.sh@53 -- # local found=0 00:12:39.482 09:41:02 -- setup/devices.sh@55 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme ]] 00:12:39.482 09:41:02 -- setup/devices.sh@56 -- # : 00:12:39.482 09:41:02 -- setup/devices.sh@59 -- # local pci status 00:12:39.482 09:41:02 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:12:39.482 09:41:02 -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:00:11.0 00:12:39.482 09:41:02 -- setup/devices.sh@47 -- # setup output config 00:12:39.482 09:41:02 -- setup/common.sh@9 -- # [[ output == output ]] 00:12:39.482 09:41:02 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:12:39.482 09:41:02 -- setup/devices.sh@62 -- # [[ 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:12:39.482 09:41:02 -- setup/devices.sh@62 -- # [[ Active devices: mount@nvme0n1:nvme0n1p1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\0\n\1\p\1* ]] 00:12:39.482 09:41:02 -- setup/devices.sh@63 -- # found=1 00:12:39.482 09:41:02 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:12:39.482 09:41:02 -- setup/devices.sh@62 -- # [[ 0000:00:10.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:12:39.482 09:41:02 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:12:39.740 09:41:02 -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:12:39.740 09:41:02 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:12:39.740 09:41:03 -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:12:39.740 09:41:03 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:12:39.740 09:41:03 -- setup/devices.sh@66 -- # (( found == 1 )) 00:12:39.740 09:41:03 -- setup/devices.sh@68 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount ]] 00:12:39.740 09:41:03 -- setup/devices.sh@71 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:12:39.740 09:41:03 -- setup/devices.sh@73 -- # [[ -e /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme ]] 00:12:39.740 09:41:03 -- setup/devices.sh@74 -- # rm 
/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:12:39.740 09:41:03 -- setup/devices.sh@110 -- # cleanup_nvme 00:12:39.740 09:41:03 -- setup/devices.sh@20 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:12:39.998 09:41:03 -- setup/devices.sh@21 -- # umount /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:12:39.998 09:41:03 -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:12:39.998 09:41:03 -- setup/devices.sh@25 -- # wipefs --all /dev/nvme0n1p1 00:12:39.998 /dev/nvme0n1p1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:12:39.998 09:41:03 -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:12:39.998 09:41:03 -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:12:40.257 /dev/nvme0n1: 8 bytes were erased at offset 0x00001000 (gpt): 45 46 49 20 50 41 52 54 00:12:40.257 /dev/nvme0n1: 8 bytes were erased at offset 0xfffff000 (gpt): 45 46 49 20 50 41 52 54 00:12:40.257 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:12:40.257 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:12:40.257 09:41:03 -- setup/devices.sh@113 -- # mkfs /dev/nvme0n1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 1024M 00:12:40.257 09:41:03 -- setup/common.sh@66 -- # local dev=/dev/nvme0n1 mount=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount size=1024M 00:12:40.257 09:41:03 -- setup/common.sh@68 -- # mkdir -p /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:12:40.257 09:41:03 -- setup/common.sh@70 -- # [[ -e /dev/nvme0n1 ]] 00:12:40.257 09:41:03 -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme0n1 1024M 00:12:40.257 09:41:03 -- setup/common.sh@72 -- # mount /dev/nvme0n1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:12:40.257 09:41:03 -- setup/devices.sh@116 -- # verify 0000:00:11.0 nvme0n1:nvme0n1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:12:40.257 09:41:03 -- setup/devices.sh@48 -- # local dev=0000:00:11.0 00:12:40.257 09:41:03 -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme0n1 00:12:40.257 09:41:03 -- setup/devices.sh@50 -- # local mount_point=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:12:40.257 09:41:03 -- setup/devices.sh@51 -- # local test_file=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:12:40.257 09:41:03 -- setup/devices.sh@53 -- # local found=0 00:12:40.257 09:41:03 -- setup/devices.sh@55 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme ]] 00:12:40.257 09:41:03 -- setup/devices.sh@56 -- # : 00:12:40.257 09:41:03 -- setup/devices.sh@59 -- # local pci status 00:12:40.257 09:41:03 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:12:40.257 09:41:03 -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:00:11.0 00:12:40.257 09:41:03 -- setup/devices.sh@47 -- # setup output config 00:12:40.257 09:41:03 -- setup/common.sh@9 -- # [[ output == output ]] 00:12:40.257 09:41:03 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:12:40.516 09:41:03 -- setup/devices.sh@62 -- # [[ 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:12:40.516 09:41:03 -- setup/devices.sh@62 -- # [[ Active devices: mount@nvme0n1:nvme0n1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\0\n\1* ]] 00:12:40.516 09:41:03 -- setup/devices.sh@63 -- # found=1 00:12:40.516 09:41:03 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:12:40.516 09:41:03 -- setup/devices.sh@62 -- # [[ 0000:00:10.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:12:40.516 
09:41:03 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:12:40.516 09:41:03 -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:12:40.516 09:41:03 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:12:40.516 09:41:03 -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:12:40.516 09:41:03 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:12:40.775 09:41:03 -- setup/devices.sh@66 -- # (( found == 1 )) 00:12:40.775 09:41:03 -- setup/devices.sh@68 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount ]] 00:12:40.775 09:41:03 -- setup/devices.sh@71 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:12:40.775 09:41:03 -- setup/devices.sh@73 -- # [[ -e /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme ]] 00:12:40.775 09:41:03 -- setup/devices.sh@74 -- # rm /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:12:40.775 09:41:03 -- setup/devices.sh@123 -- # umount /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:12:40.775 09:41:03 -- setup/devices.sh@125 -- # verify 0000:00:11.0 data@nvme0n1 '' '' 00:12:40.775 09:41:03 -- setup/devices.sh@48 -- # local dev=0000:00:11.0 00:12:40.775 09:41:03 -- setup/devices.sh@49 -- # local mounts=data@nvme0n1 00:12:40.775 09:41:03 -- setup/devices.sh@50 -- # local mount_point= 00:12:40.775 09:41:03 -- setup/devices.sh@51 -- # local test_file= 00:12:40.775 09:41:03 -- setup/devices.sh@53 -- # local found=0 00:12:40.775 09:41:03 -- setup/devices.sh@55 -- # [[ -n '' ]] 00:12:40.775 09:41:03 -- setup/devices.sh@59 -- # local pci status 00:12:40.775 09:41:03 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:12:40.775 09:41:03 -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:00:11.0 00:12:40.775 09:41:03 -- setup/devices.sh@47 -- # setup output config 00:12:40.775 09:41:03 -- setup/common.sh@9 -- # [[ output == output ]] 00:12:40.775 09:41:03 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:12:41.033 09:41:04 -- setup/devices.sh@62 -- # [[ 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:12:41.034 09:41:04 -- setup/devices.sh@62 -- # [[ Active devices: data@nvme0n1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\d\a\t\a\@\n\v\m\e\0\n\1* ]] 00:12:41.034 09:41:04 -- setup/devices.sh@63 -- # found=1 00:12:41.034 09:41:04 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:12:41.034 09:41:04 -- setup/devices.sh@62 -- # [[ 0000:00:10.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:12:41.034 09:41:04 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:12:41.034 09:41:04 -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:12:41.034 09:41:04 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:12:41.292 09:41:04 -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:12:41.292 09:41:04 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:12:41.292 09:41:04 -- setup/devices.sh@66 -- # (( found == 1 )) 00:12:41.292 09:41:04 -- setup/devices.sh@68 -- # [[ -n '' ]] 00:12:41.292 09:41:04 -- setup/devices.sh@68 -- # return 0 00:12:41.292 09:41:04 -- setup/devices.sh@128 -- # cleanup_nvme 00:12:41.292 09:41:04 -- setup/devices.sh@20 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:12:41.292 09:41:04 -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:12:41.292 09:41:04 -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:12:41.292 09:41:04 -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:12:41.292 /dev/nvme0n1: 2 bytes were erased at offset 
0x00000438 (ext4): 53 ef 00:12:41.292 ************************************ 00:12:41.292 END TEST nvme_mount 00:12:41.292 ************************************ 00:12:41.292 00:12:41.292 real 0m4.082s 00:12:41.292 user 0m0.698s 00:12:41.292 sys 0m1.082s 00:12:41.292 09:41:04 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:12:41.292 09:41:04 -- common/autotest_common.sh@10 -- # set +x 00:12:41.292 09:41:04 -- setup/devices.sh@214 -- # run_test dm_mount dm_mount 00:12:41.292 09:41:04 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:12:41.292 09:41:04 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:12:41.292 09:41:04 -- common/autotest_common.sh@10 -- # set +x 00:12:41.551 ************************************ 00:12:41.551 START TEST dm_mount 00:12:41.551 ************************************ 00:12:41.551 09:41:04 -- common/autotest_common.sh@1111 -- # dm_mount 00:12:41.551 09:41:04 -- setup/devices.sh@144 -- # pv=nvme0n1 00:12:41.551 09:41:04 -- setup/devices.sh@145 -- # pv0=nvme0n1p1 00:12:41.551 09:41:04 -- setup/devices.sh@146 -- # pv1=nvme0n1p2 00:12:41.551 09:41:04 -- setup/devices.sh@148 -- # partition_drive nvme0n1 00:12:41.551 09:41:04 -- setup/common.sh@39 -- # local disk=nvme0n1 00:12:41.551 09:41:04 -- setup/common.sh@40 -- # local part_no=2 00:12:41.551 09:41:04 -- setup/common.sh@41 -- # local size=1073741824 00:12:41.551 09:41:04 -- setup/common.sh@43 -- # local part part_start=0 part_end=0 00:12:41.551 09:41:04 -- setup/common.sh@44 -- # parts=() 00:12:41.551 09:41:04 -- setup/common.sh@44 -- # local parts 00:12:41.551 09:41:04 -- setup/common.sh@46 -- # (( part = 1 )) 00:12:41.551 09:41:04 -- setup/common.sh@46 -- # (( part <= part_no )) 00:12:41.551 09:41:04 -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:12:41.551 09:41:04 -- setup/common.sh@46 -- # (( part++ )) 00:12:41.551 09:41:04 -- setup/common.sh@46 -- # (( part <= part_no )) 00:12:41.551 09:41:04 -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:12:41.551 09:41:04 -- setup/common.sh@46 -- # (( part++ )) 00:12:41.551 09:41:04 -- setup/common.sh@46 -- # (( part <= part_no )) 00:12:41.551 09:41:04 -- setup/common.sh@51 -- # (( size /= 4096 )) 00:12:41.551 09:41:04 -- setup/common.sh@56 -- # sgdisk /dev/nvme0n1 --zap-all 00:12:41.551 09:41:04 -- setup/common.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/sync_dev_uevents.sh block/partition nvme0n1p1 nvme0n1p2 00:12:42.487 Creating new GPT entries in memory. 00:12:42.487 GPT data structures destroyed! You may now partition the disk using fdisk or 00:12:42.487 other utilities. 00:12:42.487 09:41:05 -- setup/common.sh@57 -- # (( part = 1 )) 00:12:42.487 09:41:05 -- setup/common.sh@57 -- # (( part <= part_no )) 00:12:42.487 09:41:05 -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:12:42.487 09:41:05 -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:12:42.487 09:41:05 -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=1:2048:264191 00:12:43.444 Creating new GPT entries in memory. 00:12:43.444 The operation has completed successfully. 00:12:43.444 09:41:06 -- setup/common.sh@57 -- # (( part++ )) 00:12:43.444 09:41:06 -- setup/common.sh@57 -- # (( part <= part_no )) 00:12:43.444 09:41:06 -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 
2048 : part_end + 1 )) 00:12:43.444 09:41:06 -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:12:43.444 09:41:06 -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=2:264192:526335 00:12:44.380 The operation has completed successfully. 00:12:44.380 09:41:07 -- setup/common.sh@57 -- # (( part++ )) 00:12:44.380 09:41:07 -- setup/common.sh@57 -- # (( part <= part_no )) 00:12:44.380 09:41:07 -- setup/common.sh@62 -- # wait 57069 00:12:44.639 09:41:07 -- setup/devices.sh@150 -- # dm_name=nvme_dm_test 00:12:44.639 09:41:07 -- setup/devices.sh@151 -- # dm_mount=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:12:44.639 09:41:07 -- setup/devices.sh@152 -- # dm_dummy_test_file=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm 00:12:44.639 09:41:07 -- setup/devices.sh@155 -- # dmsetup create nvme_dm_test 00:12:44.639 09:41:07 -- setup/devices.sh@160 -- # for t in {1..5} 00:12:44.639 09:41:07 -- setup/devices.sh@161 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:12:44.639 09:41:07 -- setup/devices.sh@161 -- # break 00:12:44.639 09:41:07 -- setup/devices.sh@164 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:12:44.639 09:41:07 -- setup/devices.sh@165 -- # readlink -f /dev/mapper/nvme_dm_test 00:12:44.639 09:41:07 -- setup/devices.sh@165 -- # dm=/dev/dm-0 00:12:44.639 09:41:07 -- setup/devices.sh@166 -- # dm=dm-0 00:12:44.639 09:41:07 -- setup/devices.sh@168 -- # [[ -e /sys/class/block/nvme0n1p1/holders/dm-0 ]] 00:12:44.639 09:41:07 -- setup/devices.sh@169 -- # [[ -e /sys/class/block/nvme0n1p2/holders/dm-0 ]] 00:12:44.639 09:41:07 -- setup/devices.sh@171 -- # mkfs /dev/mapper/nvme_dm_test /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:12:44.639 09:41:07 -- setup/common.sh@66 -- # local dev=/dev/mapper/nvme_dm_test mount=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount size= 00:12:44.639 09:41:07 -- setup/common.sh@68 -- # mkdir -p /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:12:44.639 09:41:07 -- setup/common.sh@70 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:12:44.639 09:41:07 -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/mapper/nvme_dm_test 00:12:44.639 09:41:07 -- setup/common.sh@72 -- # mount /dev/mapper/nvme_dm_test /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:12:44.639 09:41:07 -- setup/devices.sh@174 -- # verify 0000:00:11.0 nvme0n1:nvme_dm_test /home/vagrant/spdk_repo/spdk/test/setup/dm_mount /home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm 00:12:44.639 09:41:07 -- setup/devices.sh@48 -- # local dev=0000:00:11.0 00:12:44.639 09:41:07 -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme_dm_test 00:12:44.639 09:41:07 -- setup/devices.sh@50 -- # local mount_point=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:12:44.639 09:41:07 -- setup/devices.sh@51 -- # local test_file=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm 00:12:44.639 09:41:07 -- setup/devices.sh@53 -- # local found=0 00:12:44.639 09:41:07 -- setup/devices.sh@55 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm ]] 00:12:44.639 09:41:07 -- setup/devices.sh@56 -- # : 00:12:44.639 09:41:07 -- setup/devices.sh@59 -- # local pci status 00:12:44.639 09:41:07 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:12:44.639 09:41:07 -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:00:11.0 00:12:44.639 09:41:07 -- setup/devices.sh@47 -- # setup output config 00:12:44.639 09:41:07 -- setup/common.sh@9 -- # [[ output == output ]] 00:12:44.639 09:41:07 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:12:44.899 09:41:08 -- 
setup/devices.sh@62 -- # [[ 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:12:44.899 09:41:08 -- setup/devices.sh@62 -- # [[ Active devices: holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0,mount@nvme0n1:nvme_dm_test, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\_\d\m\_\t\e\s\t* ]] 00:12:44.899 09:41:08 -- setup/devices.sh@63 -- # found=1 00:12:44.899 09:41:08 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:12:44.899 09:41:08 -- setup/devices.sh@62 -- # [[ 0000:00:10.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:12:44.899 09:41:08 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:12:44.899 09:41:08 -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:12:44.899 09:41:08 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:12:45.159 09:41:08 -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:12:45.159 09:41:08 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:12:45.159 09:41:08 -- setup/devices.sh@66 -- # (( found == 1 )) 00:12:45.159 09:41:08 -- setup/devices.sh@68 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/dm_mount ]] 00:12:45.159 09:41:08 -- setup/devices.sh@71 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:12:45.159 09:41:08 -- setup/devices.sh@73 -- # [[ -e /home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm ]] 00:12:45.159 09:41:08 -- setup/devices.sh@74 -- # rm /home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm 00:12:45.159 09:41:08 -- setup/devices.sh@182 -- # umount /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:12:45.159 09:41:08 -- setup/devices.sh@184 -- # verify 0000:00:11.0 holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0 '' '' 00:12:45.159 09:41:08 -- setup/devices.sh@48 -- # local dev=0000:00:11.0 00:12:45.159 09:41:08 -- setup/devices.sh@49 -- # local mounts=holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0 00:12:45.159 09:41:08 -- setup/devices.sh@50 -- # local mount_point= 00:12:45.159 09:41:08 -- setup/devices.sh@51 -- # local test_file= 00:12:45.159 09:41:08 -- setup/devices.sh@53 -- # local found=0 00:12:45.159 09:41:08 -- setup/devices.sh@55 -- # [[ -n '' ]] 00:12:45.159 09:41:08 -- setup/devices.sh@59 -- # local pci status 00:12:45.159 09:41:08 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:12:45.159 09:41:08 -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:00:11.0 00:12:45.159 09:41:08 -- setup/devices.sh@47 -- # setup output config 00:12:45.159 09:41:08 -- setup/common.sh@9 -- # [[ output == output ]] 00:12:45.159 09:41:08 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:12:45.418 09:41:08 -- setup/devices.sh@62 -- # [[ 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:12:45.418 09:41:08 -- setup/devices.sh@62 -- # [[ Active devices: holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\h\o\l\d\e\r\@\n\v\m\e\0\n\1\p\1\:\d\m\-\0\,\h\o\l\d\e\r\@\n\v\m\e\0\n\1\p\2\:\d\m\-\0* ]] 00:12:45.418 09:41:08 -- setup/devices.sh@63 -- # found=1 00:12:45.418 09:41:08 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:12:45.418 09:41:08 -- setup/devices.sh@62 -- # [[ 0000:00:10.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:12:45.418 09:41:08 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:12:45.418 09:41:08 -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:12:45.418 09:41:08 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:12:45.676 09:41:08 -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:12:45.676 09:41:08 
-- setup/devices.sh@60 -- # read -r pci _ _ status 00:12:45.676 09:41:08 -- setup/devices.sh@66 -- # (( found == 1 )) 00:12:45.676 09:41:08 -- setup/devices.sh@68 -- # [[ -n '' ]] 00:12:45.676 09:41:08 -- setup/devices.sh@68 -- # return 0 00:12:45.676 09:41:08 -- setup/devices.sh@187 -- # cleanup_dm 00:12:45.676 09:41:08 -- setup/devices.sh@33 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:12:45.676 09:41:08 -- setup/devices.sh@36 -- # [[ -L /dev/mapper/nvme_dm_test ]] 00:12:45.676 09:41:08 -- setup/devices.sh@37 -- # dmsetup remove --force nvme_dm_test 00:12:45.676 09:41:08 -- setup/devices.sh@39 -- # [[ -b /dev/nvme0n1p1 ]] 00:12:45.676 09:41:08 -- setup/devices.sh@40 -- # wipefs --all /dev/nvme0n1p1 00:12:45.676 /dev/nvme0n1p1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:12:45.676 09:41:08 -- setup/devices.sh@42 -- # [[ -b /dev/nvme0n1p2 ]] 00:12:45.676 09:41:08 -- setup/devices.sh@43 -- # wipefs --all /dev/nvme0n1p2 00:12:45.676 00:12:45.676 real 0m4.272s 00:12:45.676 user 0m0.459s 00:12:45.676 sys 0m0.760s 00:12:45.676 09:41:08 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:12:45.676 ************************************ 00:12:45.676 END TEST dm_mount 00:12:45.676 ************************************ 00:12:45.676 09:41:08 -- common/autotest_common.sh@10 -- # set +x 00:12:45.676 09:41:08 -- setup/devices.sh@1 -- # cleanup 00:12:45.676 09:41:08 -- setup/devices.sh@11 -- # cleanup_nvme 00:12:45.676 09:41:08 -- setup/devices.sh@20 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:12:45.676 09:41:09 -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:12:45.676 09:41:09 -- setup/devices.sh@25 -- # wipefs --all /dev/nvme0n1p1 00:12:45.676 09:41:09 -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:12:45.676 09:41:09 -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:12:45.934 /dev/nvme0n1: 8 bytes were erased at offset 0x00001000 (gpt): 45 46 49 20 50 41 52 54 00:12:45.934 /dev/nvme0n1: 8 bytes were erased at offset 0xfffff000 (gpt): 45 46 49 20 50 41 52 54 00:12:45.934 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:12:45.934 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:12:45.934 09:41:09 -- setup/devices.sh@12 -- # cleanup_dm 00:12:45.934 09:41:09 -- setup/devices.sh@33 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:12:45.934 09:41:09 -- setup/devices.sh@36 -- # [[ -L /dev/mapper/nvme_dm_test ]] 00:12:45.934 09:41:09 -- setup/devices.sh@39 -- # [[ -b /dev/nvme0n1p1 ]] 00:12:45.934 09:41:09 -- setup/devices.sh@42 -- # [[ -b /dev/nvme0n1p2 ]] 00:12:45.934 09:41:09 -- setup/devices.sh@14 -- # [[ -b /dev/nvme0n1 ]] 00:12:45.934 09:41:09 -- setup/devices.sh@15 -- # wipefs --all /dev/nvme0n1 00:12:45.934 00:12:45.934 real 0m10.080s 00:12:45.934 user 0m1.881s 00:12:45.934 sys 0m2.522s 00:12:45.934 09:41:09 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:12:45.934 09:41:09 -- common/autotest_common.sh@10 -- # set +x 00:12:45.934 ************************************ 00:12:45.934 END TEST devices 00:12:45.934 ************************************ 00:12:46.193 00:12:46.193 real 0m23.399s 00:12:46.193 user 0m7.537s 00:12:46.193 sys 0m9.893s 00:12:46.193 09:41:09 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:12:46.193 09:41:09 -- common/autotest_common.sh@10 -- # set +x 00:12:46.193 ************************************ 00:12:46.193 END TEST setup.sh 00:12:46.193 ************************************ 00:12:46.193 09:41:09 -- 
spdk/autotest.sh@128 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 00:12:46.760 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:12:46.760 Hugepages 00:12:46.760 node hugesize free / total 00:12:46.760 node0 1048576kB 0 / 0 00:12:46.760 node0 2048kB 2048 / 2048 00:12:46.760 00:12:46.760 Type BDF Vendor Device NUMA Driver Device Block devices 00:12:46.760 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda 00:12:47.045 NVMe 0000:00:10.0 1b36 0010 unknown nvme nvme1 nvme1n1 00:12:47.045 NVMe 0000:00:11.0 1b36 0010 unknown nvme nvme0 nvme0n1 nvme0n2 nvme0n3 00:12:47.045 09:41:10 -- spdk/autotest.sh@130 -- # uname -s 00:12:47.045 09:41:10 -- spdk/autotest.sh@130 -- # [[ Linux == Linux ]] 00:12:47.045 09:41:10 -- spdk/autotest.sh@132 -- # nvme_namespace_revert 00:12:47.045 09:41:10 -- common/autotest_common.sh@1517 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:12:47.614 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:12:47.614 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:12:47.872 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:12:47.872 09:41:11 -- common/autotest_common.sh@1518 -- # sleep 1 00:12:48.806 09:41:12 -- common/autotest_common.sh@1519 -- # bdfs=() 00:12:48.806 09:41:12 -- common/autotest_common.sh@1519 -- # local bdfs 00:12:48.806 09:41:12 -- common/autotest_common.sh@1520 -- # bdfs=($(get_nvme_bdfs)) 00:12:48.806 09:41:12 -- common/autotest_common.sh@1520 -- # get_nvme_bdfs 00:12:48.806 09:41:12 -- common/autotest_common.sh@1499 -- # bdfs=() 00:12:48.806 09:41:12 -- common/autotest_common.sh@1499 -- # local bdfs 00:12:48.806 09:41:12 -- common/autotest_common.sh@1500 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:12:48.806 09:41:12 -- common/autotest_common.sh@1500 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:12:48.806 09:41:12 -- common/autotest_common.sh@1500 -- # jq -r '.config[].params.traddr' 00:12:48.806 09:41:12 -- common/autotest_common.sh@1501 -- # (( 2 == 0 )) 00:12:48.806 09:41:12 -- common/autotest_common.sh@1505 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 00:12:48.806 09:41:12 -- common/autotest_common.sh@1522 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:12:49.371 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:12:49.371 Waiting for block devices as requested 00:12:49.371 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:12:49.371 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:12:49.630 09:41:12 -- common/autotest_common.sh@1524 -- # for bdf in "${bdfs[@]}" 00:12:49.630 09:41:12 -- common/autotest_common.sh@1525 -- # get_nvme_ctrlr_from_bdf 0000:00:10.0 00:12:49.630 09:41:12 -- common/autotest_common.sh@1488 -- # grep 0000:00:10.0/nvme/nvme 00:12:49.630 09:41:12 -- common/autotest_common.sh@1488 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 00:12:49.630 09:41:12 -- common/autotest_common.sh@1488 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 00:12:49.630 09:41:12 -- common/autotest_common.sh@1489 -- # [[ -z /sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 ]] 00:12:49.630 09:41:12 -- common/autotest_common.sh@1493 -- # basename /sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 00:12:49.630 09:41:12 -- common/autotest_common.sh@1493 -- # printf '%s\n' nvme1 00:12:49.630 09:41:12 -- common/autotest_common.sh@1525 -- # 
nvme_ctrlr=/dev/nvme1 00:12:49.630 09:41:12 -- common/autotest_common.sh@1526 -- # [[ -z /dev/nvme1 ]] 00:12:49.630 09:41:12 -- common/autotest_common.sh@1531 -- # nvme id-ctrl /dev/nvme1 00:12:49.630 09:41:12 -- common/autotest_common.sh@1531 -- # grep oacs 00:12:49.630 09:41:12 -- common/autotest_common.sh@1531 -- # cut -d: -f2 00:12:49.630 09:41:12 -- common/autotest_common.sh@1531 -- # oacs=' 0x12a' 00:12:49.630 09:41:12 -- common/autotest_common.sh@1532 -- # oacs_ns_manage=8 00:12:49.630 09:41:12 -- common/autotest_common.sh@1534 -- # [[ 8 -ne 0 ]] 00:12:49.630 09:41:12 -- common/autotest_common.sh@1540 -- # nvme id-ctrl /dev/nvme1 00:12:49.630 09:41:12 -- common/autotest_common.sh@1540 -- # grep unvmcap 00:12:49.630 09:41:12 -- common/autotest_common.sh@1540 -- # cut -d: -f2 00:12:49.630 09:41:12 -- common/autotest_common.sh@1540 -- # unvmcap=' 0' 00:12:49.630 09:41:12 -- common/autotest_common.sh@1541 -- # [[ 0 -eq 0 ]] 00:12:49.630 09:41:12 -- common/autotest_common.sh@1543 -- # continue 00:12:49.630 09:41:12 -- common/autotest_common.sh@1524 -- # for bdf in "${bdfs[@]}" 00:12:49.630 09:41:12 -- common/autotest_common.sh@1525 -- # get_nvme_ctrlr_from_bdf 0000:00:11.0 00:12:49.630 09:41:12 -- common/autotest_common.sh@1488 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 00:12:49.630 09:41:12 -- common/autotest_common.sh@1488 -- # grep 0000:00:11.0/nvme/nvme 00:12:49.630 09:41:12 -- common/autotest_common.sh@1488 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 00:12:49.630 09:41:12 -- common/autotest_common.sh@1489 -- # [[ -z /sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 ]] 00:12:49.630 09:41:12 -- common/autotest_common.sh@1493 -- # basename /sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 00:12:49.630 09:41:12 -- common/autotest_common.sh@1493 -- # printf '%s\n' nvme0 00:12:49.630 09:41:12 -- common/autotest_common.sh@1525 -- # nvme_ctrlr=/dev/nvme0 00:12:49.630 09:41:12 -- common/autotest_common.sh@1526 -- # [[ -z /dev/nvme0 ]] 00:12:49.630 09:41:12 -- common/autotest_common.sh@1531 -- # nvme id-ctrl /dev/nvme0 00:12:49.630 09:41:12 -- common/autotest_common.sh@1531 -- # grep oacs 00:12:49.630 09:41:12 -- common/autotest_common.sh@1531 -- # cut -d: -f2 00:12:49.630 09:41:12 -- common/autotest_common.sh@1531 -- # oacs=' 0x12a' 00:12:49.630 09:41:12 -- common/autotest_common.sh@1532 -- # oacs_ns_manage=8 00:12:49.630 09:41:12 -- common/autotest_common.sh@1534 -- # [[ 8 -ne 0 ]] 00:12:49.630 09:41:12 -- common/autotest_common.sh@1540 -- # nvme id-ctrl /dev/nvme0 00:12:49.630 09:41:12 -- common/autotest_common.sh@1540 -- # grep unvmcap 00:12:49.630 09:41:12 -- common/autotest_common.sh@1540 -- # cut -d: -f2 00:12:49.630 09:41:12 -- common/autotest_common.sh@1540 -- # unvmcap=' 0' 00:12:49.630 09:41:12 -- common/autotest_common.sh@1541 -- # [[ 0 -eq 0 ]] 00:12:49.630 09:41:12 -- common/autotest_common.sh@1543 -- # continue 00:12:49.630 09:41:12 -- spdk/autotest.sh@135 -- # timing_exit pre_cleanup 00:12:49.630 09:41:12 -- common/autotest_common.sh@716 -- # xtrace_disable 00:12:49.630 09:41:12 -- common/autotest_common.sh@10 -- # set +x 00:12:49.630 09:41:12 -- spdk/autotest.sh@138 -- # timing_enter afterboot 00:12:49.630 09:41:12 -- common/autotest_common.sh@710 -- # xtrace_disable 00:12:49.630 09:41:12 -- common/autotest_common.sh@10 -- # set +x 00:12:49.630 09:41:12 -- spdk/autotest.sh@139 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:12:50.205 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not 
binding PCI dev 00:12:50.462 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:12:50.462 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:12:50.462 09:41:13 -- spdk/autotest.sh@140 -- # timing_exit afterboot 00:12:50.462 09:41:13 -- common/autotest_common.sh@716 -- # xtrace_disable 00:12:50.462 09:41:13 -- common/autotest_common.sh@10 -- # set +x 00:12:50.462 09:41:13 -- spdk/autotest.sh@144 -- # opal_revert_cleanup 00:12:50.462 09:41:13 -- common/autotest_common.sh@1577 -- # mapfile -t bdfs 00:12:50.462 09:41:13 -- common/autotest_common.sh@1577 -- # get_nvme_bdfs_by_id 0x0a54 00:12:50.462 09:41:13 -- common/autotest_common.sh@1563 -- # bdfs=() 00:12:50.462 09:41:13 -- common/autotest_common.sh@1563 -- # local bdfs 00:12:50.462 09:41:13 -- common/autotest_common.sh@1565 -- # get_nvme_bdfs 00:12:50.462 09:41:13 -- common/autotest_common.sh@1499 -- # bdfs=() 00:12:50.462 09:41:13 -- common/autotest_common.sh@1499 -- # local bdfs 00:12:50.462 09:41:13 -- common/autotest_common.sh@1500 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:12:50.462 09:41:13 -- common/autotest_common.sh@1500 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:12:50.462 09:41:13 -- common/autotest_common.sh@1500 -- # jq -r '.config[].params.traddr' 00:12:50.720 09:41:13 -- common/autotest_common.sh@1501 -- # (( 2 == 0 )) 00:12:50.720 09:41:13 -- common/autotest_common.sh@1505 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 00:12:50.720 09:41:13 -- common/autotest_common.sh@1565 -- # for bdf in $(get_nvme_bdfs) 00:12:50.720 09:41:13 -- common/autotest_common.sh@1566 -- # cat /sys/bus/pci/devices/0000:00:10.0/device 00:12:50.720 09:41:13 -- common/autotest_common.sh@1566 -- # device=0x0010 00:12:50.720 09:41:13 -- common/autotest_common.sh@1567 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:12:50.720 09:41:13 -- common/autotest_common.sh@1565 -- # for bdf in $(get_nvme_bdfs) 00:12:50.720 09:41:13 -- common/autotest_common.sh@1566 -- # cat /sys/bus/pci/devices/0000:00:11.0/device 00:12:50.720 09:41:13 -- common/autotest_common.sh@1566 -- # device=0x0010 00:12:50.720 09:41:13 -- common/autotest_common.sh@1567 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:12:50.720 09:41:13 -- common/autotest_common.sh@1572 -- # printf '%s\n' 00:12:50.720 09:41:13 -- common/autotest_common.sh@1578 -- # [[ -z '' ]] 00:12:50.720 09:41:13 -- common/autotest_common.sh@1579 -- # return 0 00:12:50.720 09:41:13 -- spdk/autotest.sh@150 -- # '[' 0 -eq 1 ']' 00:12:50.720 09:41:13 -- spdk/autotest.sh@154 -- # '[' 1 -eq 1 ']' 00:12:50.720 09:41:13 -- spdk/autotest.sh@155 -- # [[ 0 -eq 1 ]] 00:12:50.720 09:41:13 -- spdk/autotest.sh@155 -- # [[ 0 -eq 1 ]] 00:12:50.720 09:41:13 -- spdk/autotest.sh@162 -- # timing_enter lib 00:12:50.720 09:41:13 -- common/autotest_common.sh@710 -- # xtrace_disable 00:12:50.720 09:41:13 -- common/autotest_common.sh@10 -- # set +x 00:12:50.720 09:41:13 -- spdk/autotest.sh@164 -- # run_test env /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:12:50.720 09:41:13 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:12:50.720 09:41:13 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:12:50.720 09:41:13 -- common/autotest_common.sh@10 -- # set +x 00:12:50.720 ************************************ 00:12:50.720 START TEST env 00:12:50.720 ************************************ 00:12:50.720 09:41:13 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:12:50.720 * Looking for test storage... 
00:12:50.720 * Found test storage at /home/vagrant/spdk_repo/spdk/test/env 00:12:50.720 09:41:14 -- env/env.sh@10 -- # run_test env_memory /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:12:50.720 09:41:14 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:12:50.720 09:41:14 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:12:50.720 09:41:14 -- common/autotest_common.sh@10 -- # set +x 00:12:50.978 ************************************ 00:12:50.978 START TEST env_memory 00:12:50.978 ************************************ 00:12:50.978 09:41:14 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:12:50.978 00:12:50.978 00:12:50.978 CUnit - A unit testing framework for C - Version 2.1-3 00:12:50.978 http://cunit.sourceforge.net/ 00:12:50.978 00:12:50.978 00:12:50.978 Suite: memory 00:12:50.978 Test: alloc and free memory map ...[2024-04-18 09:41:14.238089] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed 00:12:50.978 passed 00:12:50.978 Test: mem map translation ...[2024-04-18 09:41:14.309997] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 590:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234 00:12:50.978 [2024-04-18 09:41:14.310149] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 590:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152 00:12:50.978 [2024-04-18 09:41:14.310259] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 584:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656 00:12:50.978 [2024-04-18 09:41:14.310295] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 600:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map 00:12:51.236 passed 00:12:51.236 Test: mem map registration ...[2024-04-18 09:41:14.410065] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 346:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=0x200000 len=1234 00:12:51.236 [2024-04-18 09:41:14.410201] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 346:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=0x4d2 len=2097152 00:12:51.236 passed 00:12:51.236 Test: mem map adjacent registrations ...passed 00:12:51.236 00:12:51.236 Run Summary: Type Total Ran Passed Failed Inactive 00:12:51.236 suites 1 1 n/a 0 0 00:12:51.236 tests 4 4 4 0 0 00:12:51.236 asserts 152 152 152 0 n/a 00:12:51.236 00:12:51.236 Elapsed time = 0.354 seconds 00:12:51.236 00:12:51.236 real 0m0.397s 00:12:51.236 user 0m0.360s 00:12:51.236 sys 0m0.030s 00:12:51.236 09:41:14 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:12:51.236 09:41:14 -- common/autotest_common.sh@10 -- # set +x 00:12:51.236 ************************************ 00:12:51.236 END TEST env_memory 00:12:51.236 ************************************ 00:12:51.236 09:41:14 -- env/env.sh@11 -- # run_test env_vtophys /home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:12:51.236 09:41:14 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:12:51.236 09:41:14 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:12:51.236 09:41:14 -- common/autotest_common.sh@10 -- # set +x 00:12:51.494 ************************************ 00:12:51.494 START TEST env_vtophys 00:12:51.494 ************************************ 00:12:51.494 09:41:14 -- common/autotest_common.sh@1111 -- # 
/home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:12:51.494 EAL: lib.eal log level changed from notice to debug 00:12:51.494 EAL: Detected lcore 0 as core 0 on socket 0 00:12:51.494 EAL: Detected lcore 1 as core 0 on socket 0 00:12:51.494 EAL: Detected lcore 2 as core 0 on socket 0 00:12:51.494 EAL: Detected lcore 3 as core 0 on socket 0 00:12:51.494 EAL: Detected lcore 4 as core 0 on socket 0 00:12:51.494 EAL: Detected lcore 5 as core 0 on socket 0 00:12:51.494 EAL: Detected lcore 6 as core 0 on socket 0 00:12:51.494 EAL: Detected lcore 7 as core 0 on socket 0 00:12:51.494 EAL: Detected lcore 8 as core 0 on socket 0 00:12:51.494 EAL: Detected lcore 9 as core 0 on socket 0 00:12:51.494 EAL: Maximum logical cores by configuration: 128 00:12:51.494 EAL: Detected CPU lcores: 10 00:12:51.494 EAL: Detected NUMA nodes: 1 00:12:51.494 EAL: Checking presence of .so 'librte_eal.so.24.0' 00:12:51.494 EAL: Detected shared linkage of DPDK 00:12:51.494 EAL: No shared files mode enabled, IPC will be disabled 00:12:51.494 EAL: Selected IOVA mode 'PA' 00:12:51.494 EAL: Probing VFIO support... 00:12:51.494 EAL: Module /sys/module/vfio not found! error 2 (No such file or directory) 00:12:51.494 EAL: VFIO modules not loaded, skipping VFIO support... 00:12:51.494 EAL: Ask a virtual area of 0x2e000 bytes 00:12:51.494 EAL: Virtual area found at 0x200000000000 (size = 0x2e000) 00:12:51.494 EAL: Setting up physically contiguous memory... 00:12:51.494 EAL: Setting maximum number of open files to 524288 00:12:51.494 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152 00:12:51.494 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152 00:12:51.494 EAL: Ask a virtual area of 0x61000 bytes 00:12:51.494 EAL: Virtual area found at 0x20000002e000 (size = 0x61000) 00:12:51.494 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:12:51.494 EAL: Ask a virtual area of 0x400000000 bytes 00:12:51.494 EAL: Virtual area found at 0x200000200000 (size = 0x400000000) 00:12:51.494 EAL: VA reserved for memseg list at 0x200000200000, size 400000000 00:12:51.494 EAL: Ask a virtual area of 0x61000 bytes 00:12:51.494 EAL: Virtual area found at 0x200400200000 (size = 0x61000) 00:12:51.494 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:12:51.494 EAL: Ask a virtual area of 0x400000000 bytes 00:12:51.494 EAL: Virtual area found at 0x200400400000 (size = 0x400000000) 00:12:51.495 EAL: VA reserved for memseg list at 0x200400400000, size 400000000 00:12:51.495 EAL: Ask a virtual area of 0x61000 bytes 00:12:51.495 EAL: Virtual area found at 0x200800400000 (size = 0x61000) 00:12:51.495 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:12:51.495 EAL: Ask a virtual area of 0x400000000 bytes 00:12:51.495 EAL: Virtual area found at 0x200800600000 (size = 0x400000000) 00:12:51.495 EAL: VA reserved for memseg list at 0x200800600000, size 400000000 00:12:51.495 EAL: Ask a virtual area of 0x61000 bytes 00:12:51.495 EAL: Virtual area found at 0x200c00600000 (size = 0x61000) 00:12:51.495 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:12:51.495 EAL: Ask a virtual area of 0x400000000 bytes 00:12:51.495 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000) 00:12:51.495 EAL: VA reserved for memseg list at 0x200c00800000, size 400000000 00:12:51.495 EAL: Hugepages will be freed exactly as allocated. 
00:12:51.495 EAL: No shared files mode enabled, IPC is disabled 00:12:51.495 EAL: No shared files mode enabled, IPC is disabled 00:12:51.751 EAL: TSC frequency is ~2200000 KHz 00:12:51.751 EAL: Main lcore 0 is ready (tid=7f42bc20ea40;cpuset=[0]) 00:12:51.751 EAL: Trying to obtain current memory policy. 00:12:51.751 EAL: Setting policy MPOL_PREFERRED for socket 0 00:12:51.751 EAL: Restoring previous memory policy: 0 00:12:51.751 EAL: request: mp_malloc_sync 00:12:51.751 EAL: No shared files mode enabled, IPC is disabled 00:12:51.751 EAL: Heap on socket 0 was expanded by 2MB 00:12:51.751 EAL: Module /sys/module/vfio not found! error 2 (No such file or directory) 00:12:51.751 EAL: No PCI address specified using 'addr=' in: bus=pci 00:12:51.751 EAL: Mem event callback 'spdk:(nil)' registered 00:12:51.751 EAL: Module /sys/module/vfio_pci not found! error 2 (No such file or directory) 00:12:51.751 00:12:51.751 00:12:51.751 CUnit - A unit testing framework for C - Version 2.1-3 00:12:51.751 http://cunit.sourceforge.net/ 00:12:51.751 00:12:51.751 00:12:51.751 Suite: components_suite 00:12:52.317 Test: vtophys_malloc_test ...passed 00:12:52.317 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy. 00:12:52.317 EAL: Setting policy MPOL_PREFERRED for socket 0 00:12:52.317 EAL: Restoring previous memory policy: 4 00:12:52.317 EAL: Calling mem event callback 'spdk:(nil)' 00:12:52.317 EAL: request: mp_malloc_sync 00:12:52.317 EAL: No shared files mode enabled, IPC is disabled 00:12:52.317 EAL: Heap on socket 0 was expanded by 4MB 00:12:52.317 EAL: Calling mem event callback 'spdk:(nil)' 00:12:52.317 EAL: request: mp_malloc_sync 00:12:52.317 EAL: No shared files mode enabled, IPC is disabled 00:12:52.317 EAL: Heap on socket 0 was shrunk by 4MB 00:12:52.317 EAL: Trying to obtain current memory policy. 00:12:52.317 EAL: Setting policy MPOL_PREFERRED for socket 0 00:12:52.317 EAL: Restoring previous memory policy: 4 00:12:52.317 EAL: Calling mem event callback 'spdk:(nil)' 00:12:52.317 EAL: request: mp_malloc_sync 00:12:52.317 EAL: No shared files mode enabled, IPC is disabled 00:12:52.317 EAL: Heap on socket 0 was expanded by 6MB 00:12:52.317 EAL: Calling mem event callback 'spdk:(nil)' 00:12:52.317 EAL: request: mp_malloc_sync 00:12:52.317 EAL: No shared files mode enabled, IPC is disabled 00:12:52.317 EAL: Heap on socket 0 was shrunk by 6MB 00:12:52.317 EAL: Trying to obtain current memory policy. 00:12:52.317 EAL: Setting policy MPOL_PREFERRED for socket 0 00:12:52.317 EAL: Restoring previous memory policy: 4 00:12:52.317 EAL: Calling mem event callback 'spdk:(nil)' 00:12:52.317 EAL: request: mp_malloc_sync 00:12:52.317 EAL: No shared files mode enabled, IPC is disabled 00:12:52.317 EAL: Heap on socket 0 was expanded by 10MB 00:12:52.317 EAL: Calling mem event callback 'spdk:(nil)' 00:12:52.317 EAL: request: mp_malloc_sync 00:12:52.317 EAL: No shared files mode enabled, IPC is disabled 00:12:52.317 EAL: Heap on socket 0 was shrunk by 10MB 00:12:52.317 EAL: Trying to obtain current memory policy. 
00:12:52.317 EAL: Setting policy MPOL_PREFERRED for socket 0 00:12:52.317 EAL: Restoring previous memory policy: 4 00:12:52.317 EAL: Calling mem event callback 'spdk:(nil)' 00:12:52.317 EAL: request: mp_malloc_sync 00:12:52.317 EAL: No shared files mode enabled, IPC is disabled 00:12:52.317 EAL: Heap on socket 0 was expanded by 18MB 00:12:52.317 EAL: Calling mem event callback 'spdk:(nil)' 00:12:52.317 EAL: request: mp_malloc_sync 00:12:52.317 EAL: No shared files mode enabled, IPC is disabled 00:12:52.317 EAL: Heap on socket 0 was shrunk by 18MB 00:12:52.317 EAL: Trying to obtain current memory policy. 00:12:52.317 EAL: Setting policy MPOL_PREFERRED for socket 0 00:12:52.317 EAL: Restoring previous memory policy: 4 00:12:52.317 EAL: Calling mem event callback 'spdk:(nil)' 00:12:52.317 EAL: request: mp_malloc_sync 00:12:52.317 EAL: No shared files mode enabled, IPC is disabled 00:12:52.317 EAL: Heap on socket 0 was expanded by 34MB 00:12:52.317 EAL: Calling mem event callback 'spdk:(nil)' 00:12:52.317 EAL: request: mp_malloc_sync 00:12:52.317 EAL: No shared files mode enabled, IPC is disabled 00:12:52.317 EAL: Heap on socket 0 was shrunk by 34MB 00:12:52.317 EAL: Trying to obtain current memory policy. 00:12:52.317 EAL: Setting policy MPOL_PREFERRED for socket 0 00:12:52.317 EAL: Restoring previous memory policy: 4 00:12:52.317 EAL: Calling mem event callback 'spdk:(nil)' 00:12:52.317 EAL: request: mp_malloc_sync 00:12:52.318 EAL: No shared files mode enabled, IPC is disabled 00:12:52.318 EAL: Heap on socket 0 was expanded by 66MB 00:12:52.576 EAL: Calling mem event callback 'spdk:(nil)' 00:12:52.576 EAL: request: mp_malloc_sync 00:12:52.576 EAL: No shared files mode enabled, IPC is disabled 00:12:52.576 EAL: Heap on socket 0 was shrunk by 66MB 00:12:52.576 EAL: Trying to obtain current memory policy. 00:12:52.576 EAL: Setting policy MPOL_PREFERRED for socket 0 00:12:52.576 EAL: Restoring previous memory policy: 4 00:12:52.576 EAL: Calling mem event callback 'spdk:(nil)' 00:12:52.576 EAL: request: mp_malloc_sync 00:12:52.576 EAL: No shared files mode enabled, IPC is disabled 00:12:52.576 EAL: Heap on socket 0 was expanded by 130MB 00:12:52.833 EAL: Calling mem event callback 'spdk:(nil)' 00:12:52.833 EAL: request: mp_malloc_sync 00:12:52.833 EAL: No shared files mode enabled, IPC is disabled 00:12:52.833 EAL: Heap on socket 0 was shrunk by 130MB 00:12:53.092 EAL: Trying to obtain current memory policy. 00:12:53.092 EAL: Setting policy MPOL_PREFERRED for socket 0 00:12:53.092 EAL: Restoring previous memory policy: 4 00:12:53.092 EAL: Calling mem event callback 'spdk:(nil)' 00:12:53.092 EAL: request: mp_malloc_sync 00:12:53.092 EAL: No shared files mode enabled, IPC is disabled 00:12:53.092 EAL: Heap on socket 0 was expanded by 258MB 00:12:53.707 EAL: Calling mem event callback 'spdk:(nil)' 00:12:53.707 EAL: request: mp_malloc_sync 00:12:53.707 EAL: No shared files mode enabled, IPC is disabled 00:12:53.707 EAL: Heap on socket 0 was shrunk by 258MB 00:12:53.966 EAL: Trying to obtain current memory policy. 
00:12:53.966 EAL: Setting policy MPOL_PREFERRED for socket 0 00:12:53.966 EAL: Restoring previous memory policy: 4 00:12:53.966 EAL: Calling mem event callback 'spdk:(nil)' 00:12:53.966 EAL: request: mp_malloc_sync 00:12:53.966 EAL: No shared files mode enabled, IPC is disabled 00:12:53.966 EAL: Heap on socket 0 was expanded by 514MB 00:12:54.902 EAL: Calling mem event callback 'spdk:(nil)' 00:12:55.160 EAL: request: mp_malloc_sync 00:12:55.160 EAL: No shared files mode enabled, IPC is disabled 00:12:55.160 EAL: Heap on socket 0 was shrunk by 514MB 00:12:55.728 EAL: Trying to obtain current memory policy. 00:12:55.728 EAL: Setting policy MPOL_PREFERRED for socket 0 00:12:55.986 EAL: Restoring previous memory policy: 4 00:12:55.986 EAL: Calling mem event callback 'spdk:(nil)' 00:12:55.986 EAL: request: mp_malloc_sync 00:12:55.986 EAL: No shared files mode enabled, IPC is disabled 00:12:55.986 EAL: Heap on socket 0 was expanded by 1026MB 00:12:57.891 EAL: Calling mem event callback 'spdk:(nil)' 00:12:57.891 EAL: request: mp_malloc_sync 00:12:57.891 EAL: No shared files mode enabled, IPC is disabled 00:12:57.891 EAL: Heap on socket 0 was shrunk by 1026MB 00:12:59.796 passed 00:12:59.796 00:12:59.796 Run Summary: Type Total Ran Passed Failed Inactive 00:12:59.796 suites 1 1 n/a 0 0 00:12:59.796 tests 2 2 2 0 0 00:12:59.796 asserts 5278 5278 5278 0 n/a 00:12:59.796 00:12:59.796 Elapsed time = 7.755 seconds 00:12:59.796 EAL: Calling mem event callback 'spdk:(nil)' 00:12:59.796 EAL: request: mp_malloc_sync 00:12:59.796 EAL: No shared files mode enabled, IPC is disabled 00:12:59.796 EAL: Heap on socket 0 was shrunk by 2MB 00:12:59.796 EAL: No shared files mode enabled, IPC is disabled 00:12:59.796 EAL: No shared files mode enabled, IPC is disabled 00:12:59.796 EAL: No shared files mode enabled, IPC is disabled 00:12:59.796 00:12:59.796 real 0m8.104s 00:12:59.796 user 0m6.828s 00:12:59.796 sys 0m1.105s 00:12:59.796 09:41:22 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:12:59.796 09:41:22 -- common/autotest_common.sh@10 -- # set +x 00:12:59.796 ************************************ 00:12:59.796 END TEST env_vtophys 00:12:59.796 ************************************ 00:12:59.796 09:41:22 -- env/env.sh@12 -- # run_test env_pci /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut 00:12:59.796 09:41:22 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:12:59.796 09:41:22 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:12:59.796 09:41:22 -- common/autotest_common.sh@10 -- # set +x 00:12:59.796 ************************************ 00:12:59.796 START TEST env_pci 00:12:59.796 ************************************ 00:12:59.796 09:41:22 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut 00:12:59.796 00:12:59.796 00:12:59.796 CUnit - A unit testing framework for C - Version 2.1-3 00:12:59.796 http://cunit.sourceforge.net/ 00:12:59.796 00:12:59.796 00:12:59.796 Suite: pci 00:12:59.796 Test: pci_hook ...[2024-04-18 09:41:22.937656] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/pci.c:1040:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 58355 has claimed it 00:12:59.796 passed 00:12:59.796 00:12:59.796 Run Summary: Type Total Ran Passed Failed Inactive 00:12:59.796 suites 1 1 n/a 0 0 00:12:59.796 tests 1 1 1 0 0 00:12:59.796 asserts 25 25 25 0 n/a 00:12:59.796 00:12:59.796 Elapsed time = 0.005 seconds 00:12:59.796 EAL: Cannot find device (10000:00:01.0) 00:12:59.796 EAL: Failed to attach device 
on primary process 00:12:59.796 00:12:59.796 real 0m0.077s 00:12:59.796 user 0m0.035s 00:12:59.796 sys 0m0.041s 00:12:59.796 09:41:22 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:12:59.796 09:41:22 -- common/autotest_common.sh@10 -- # set +x 00:12:59.796 ************************************ 00:12:59.796 END TEST env_pci 00:12:59.796 ************************************ 00:12:59.796 09:41:23 -- env/env.sh@14 -- # argv='-c 0x1 ' 00:12:59.796 09:41:23 -- env/env.sh@15 -- # uname 00:12:59.796 09:41:23 -- env/env.sh@15 -- # '[' Linux = Linux ']' 00:12:59.796 09:41:23 -- env/env.sh@22 -- # argv+=--base-virtaddr=0x200000000000 00:12:59.796 09:41:23 -- env/env.sh@24 -- # run_test env_dpdk_post_init /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:12:59.796 09:41:23 -- common/autotest_common.sh@1087 -- # '[' 5 -le 1 ']' 00:12:59.796 09:41:23 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:12:59.796 09:41:23 -- common/autotest_common.sh@10 -- # set +x 00:12:59.796 ************************************ 00:12:59.796 START TEST env_dpdk_post_init 00:12:59.796 ************************************ 00:12:59.796 09:41:23 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:12:59.796 EAL: Detected CPU lcores: 10 00:12:59.796 EAL: Detected NUMA nodes: 1 00:12:59.796 EAL: Detected shared linkage of DPDK 00:13:00.054 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:13:00.054 EAL: Selected IOVA mode 'PA' 00:13:00.054 TELEMETRY: No legacy callbacks, legacy socket not created 00:13:00.054 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:10.0 (socket -1) 00:13:00.054 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:11.0 (socket -1) 00:13:00.054 Starting DPDK initialization... 00:13:00.054 Starting SPDK post initialization... 00:13:00.054 SPDK NVMe probe 00:13:00.054 Attaching to 0000:00:10.0 00:13:00.054 Attaching to 0000:00:11.0 00:13:00.054 Attached to 0000:00:10.0 00:13:00.054 Attached to 0000:00:11.0 00:13:00.054 Cleaning up... 
00:13:00.054 00:13:00.054 real 0m0.293s 00:13:00.054 user 0m0.091s 00:13:00.054 sys 0m0.102s 00:13:00.054 09:41:23 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:13:00.054 ************************************ 00:13:00.054 END TEST env_dpdk_post_init 00:13:00.054 ************************************ 00:13:00.054 09:41:23 -- common/autotest_common.sh@10 -- # set +x 00:13:00.054 09:41:23 -- env/env.sh@26 -- # uname 00:13:00.313 09:41:23 -- env/env.sh@26 -- # '[' Linux = Linux ']' 00:13:00.313 09:41:23 -- env/env.sh@29 -- # run_test env_mem_callbacks /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks 00:13:00.313 09:41:23 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:13:00.313 09:41:23 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:13:00.313 09:41:23 -- common/autotest_common.sh@10 -- # set +x 00:13:00.313 ************************************ 00:13:00.313 START TEST env_mem_callbacks 00:13:00.313 ************************************ 00:13:00.313 09:41:23 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks 00:13:00.313 EAL: Detected CPU lcores: 10 00:13:00.313 EAL: Detected NUMA nodes: 1 00:13:00.313 EAL: Detected shared linkage of DPDK 00:13:00.313 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:13:00.313 EAL: Selected IOVA mode 'PA' 00:13:00.571 TELEMETRY: No legacy callbacks, legacy socket not created 00:13:00.571 00:13:00.571 00:13:00.571 CUnit - A unit testing framework for C - Version 2.1-3 00:13:00.571 http://cunit.sourceforge.net/ 00:13:00.571 00:13:00.571 00:13:00.571 Suite: memory 00:13:00.571 Test: test ... 00:13:00.571 register 0x200000200000 2097152 00:13:00.571 malloc 3145728 00:13:00.571 register 0x200000400000 4194304 00:13:00.571 buf 0x2000004fffc0 len 3145728 PASSED 00:13:00.571 malloc 64 00:13:00.571 buf 0x2000004ffec0 len 64 PASSED 00:13:00.571 malloc 4194304 00:13:00.571 register 0x200000800000 6291456 00:13:00.571 buf 0x2000009fffc0 len 4194304 PASSED 00:13:00.571 free 0x2000004fffc0 3145728 00:13:00.571 free 0x2000004ffec0 64 00:13:00.571 unregister 0x200000400000 4194304 PASSED 00:13:00.571 free 0x2000009fffc0 4194304 00:13:00.571 unregister 0x200000800000 6291456 PASSED 00:13:00.571 malloc 8388608 00:13:00.571 register 0x200000400000 10485760 00:13:00.571 buf 0x2000005fffc0 len 8388608 PASSED 00:13:00.571 free 0x2000005fffc0 8388608 00:13:00.571 unregister 0x200000400000 10485760 PASSED 00:13:00.571 passed 00:13:00.571 00:13:00.571 Run Summary: Type Total Ran Passed Failed Inactive 00:13:00.571 suites 1 1 n/a 0 0 00:13:00.571 tests 1 1 1 0 0 00:13:00.571 asserts 15 15 15 0 n/a 00:13:00.571 00:13:00.571 Elapsed time = 0.072 seconds 00:13:00.571 00:13:00.571 real 0m0.288s 00:13:00.571 user 0m0.120s 00:13:00.571 sys 0m0.065s 00:13:00.571 09:41:23 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:13:00.571 09:41:23 -- common/autotest_common.sh@10 -- # set +x 00:13:00.571 ************************************ 00:13:00.571 END TEST env_mem_callbacks 00:13:00.571 ************************************ 00:13:00.571 00:13:00.571 real 0m9.872s 00:13:00.571 user 0m7.673s 00:13:00.571 sys 0m1.733s 00:13:00.571 09:41:23 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:13:00.571 09:41:23 -- common/autotest_common.sh@10 -- # set +x 00:13:00.571 ************************************ 00:13:00.571 END TEST env 00:13:00.571 ************************************ 00:13:00.571 09:41:23 -- spdk/autotest.sh@165 -- # run_test rpc /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 
00:13:00.571 09:41:23 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:13:00.571 09:41:23 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:13:00.571 09:41:23 -- common/autotest_common.sh@10 -- # set +x 00:13:00.829 ************************************ 00:13:00.829 START TEST rpc 00:13:00.829 ************************************ 00:13:00.829 09:41:23 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 00:13:00.829 * Looking for test storage... 00:13:00.829 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc 00:13:00.829 09:41:24 -- rpc/rpc.sh@65 -- # spdk_pid=58493 00:13:00.829 09:41:24 -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:13:00.829 09:41:24 -- rpc/rpc.sh@64 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -e bdev 00:13:00.829 09:41:24 -- rpc/rpc.sh@67 -- # waitforlisten 58493 00:13:00.829 09:41:24 -- common/autotest_common.sh@817 -- # '[' -z 58493 ']' 00:13:00.829 09:41:24 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:00.829 09:41:24 -- common/autotest_common.sh@822 -- # local max_retries=100 00:13:00.829 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:00.829 09:41:24 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:00.829 09:41:24 -- common/autotest_common.sh@826 -- # xtrace_disable 00:13:00.829 09:41:24 -- common/autotest_common.sh@10 -- # set +x 00:13:00.829 [2024-04-18 09:41:24.207441] Starting SPDK v24.05-pre git sha1 65b4e17c6 / DPDK 23.11.0 initialization... 00:13:00.829 [2024-04-18 09:41:24.207625] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58493 ] 00:13:01.089 [2024-04-18 09:41:24.388833] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:01.350 [2024-04-18 09:41:24.677471] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified. 00:13:01.350 [2024-04-18 09:41:24.677576] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 58493' to capture a snapshot of events at runtime. 00:13:01.350 [2024-04-18 09:41:24.677593] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:01.350 [2024-04-18 09:41:24.677608] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:01.350 [2024-04-18 09:41:24.677618] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid58493 for offline analysis/debug. 
00:13:01.350 [2024-04-18 09:41:24.677655] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:13:02.283 09:41:25 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:13:02.283 09:41:25 -- common/autotest_common.sh@850 -- # return 0 00:13:02.283 09:41:25 -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 00:13:02.283 09:41:25 -- rpc/rpc.sh@69 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 00:13:02.283 09:41:25 -- rpc/rpc.sh@72 -- # rpc=rpc_cmd 00:13:02.283 09:41:25 -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity 00:13:02.283 09:41:25 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:13:02.283 09:41:25 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:13:02.283 09:41:25 -- common/autotest_common.sh@10 -- # set +x 00:13:02.283 ************************************ 00:13:02.283 START TEST rpc_integrity 00:13:02.283 ************************************ 00:13:02.283 09:41:25 -- common/autotest_common.sh@1111 -- # rpc_integrity 00:13:02.283 09:41:25 -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:13:02.283 09:41:25 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:02.283 09:41:25 -- common/autotest_common.sh@10 -- # set +x 00:13:02.283 09:41:25 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:02.283 09:41:25 -- rpc/rpc.sh@12 -- # bdevs='[]' 00:13:02.283 09:41:25 -- rpc/rpc.sh@13 -- # jq length 00:13:02.283 09:41:25 -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:13:02.283 09:41:25 -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:13:02.283 09:41:25 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:02.283 09:41:25 -- common/autotest_common.sh@10 -- # set +x 00:13:02.283 09:41:25 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:02.283 09:41:25 -- rpc/rpc.sh@15 -- # malloc=Malloc0 00:13:02.283 09:41:25 -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:13:02.283 09:41:25 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:02.283 09:41:25 -- common/autotest_common.sh@10 -- # set +x 00:13:02.283 09:41:25 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:02.283 09:41:25 -- rpc/rpc.sh@16 -- # bdevs='[ 00:13:02.283 { 00:13:02.283 "name": "Malloc0", 00:13:02.283 "aliases": [ 00:13:02.283 "0c370f1c-68fe-4498-9e93-37354dde0764" 00:13:02.283 ], 00:13:02.283 "product_name": "Malloc disk", 00:13:02.283 "block_size": 512, 00:13:02.283 "num_blocks": 16384, 00:13:02.283 "uuid": "0c370f1c-68fe-4498-9e93-37354dde0764", 00:13:02.283 "assigned_rate_limits": { 00:13:02.283 "rw_ios_per_sec": 0, 00:13:02.283 "rw_mbytes_per_sec": 0, 00:13:02.283 "r_mbytes_per_sec": 0, 00:13:02.283 "w_mbytes_per_sec": 0 00:13:02.283 }, 00:13:02.283 "claimed": false, 00:13:02.283 "zoned": false, 00:13:02.283 "supported_io_types": { 00:13:02.284 "read": true, 00:13:02.284 "write": true, 00:13:02.284 "unmap": true, 00:13:02.284 "write_zeroes": true, 00:13:02.284 "flush": true, 00:13:02.284 "reset": true, 00:13:02.284 "compare": false, 00:13:02.284 "compare_and_write": false, 00:13:02.284 "abort": true, 00:13:02.284 "nvme_admin": false, 00:13:02.284 "nvme_io": false 00:13:02.284 }, 00:13:02.284 "memory_domains": [ 00:13:02.284 { 00:13:02.284 "dma_device_id": "system", 00:13:02.284 "dma_device_type": 1 
00:13:02.284 }, 00:13:02.284 { 00:13:02.284 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:02.284 "dma_device_type": 2 00:13:02.284 } 00:13:02.284 ], 00:13:02.284 "driver_specific": {} 00:13:02.284 } 00:13:02.284 ]' 00:13:02.284 09:41:25 -- rpc/rpc.sh@17 -- # jq length 00:13:02.542 09:41:25 -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:13:02.542 09:41:25 -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0 00:13:02.542 09:41:25 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:02.542 09:41:25 -- common/autotest_common.sh@10 -- # set +x 00:13:02.542 [2024-04-18 09:41:25.710570] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc0 00:13:02.542 [2024-04-18 09:41:25.710703] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:02.542 [2024-04-18 09:41:25.710737] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:13:02.542 [2024-04-18 09:41:25.710755] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:02.542 [2024-04-18 09:41:25.713946] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:02.542 [2024-04-18 09:41:25.714020] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:13:02.542 Passthru0 00:13:02.542 09:41:25 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:02.542 09:41:25 -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:13:02.542 09:41:25 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:02.542 09:41:25 -- common/autotest_common.sh@10 -- # set +x 00:13:02.542 09:41:25 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:02.542 09:41:25 -- rpc/rpc.sh@20 -- # bdevs='[ 00:13:02.542 { 00:13:02.542 "name": "Malloc0", 00:13:02.542 "aliases": [ 00:13:02.542 "0c370f1c-68fe-4498-9e93-37354dde0764" 00:13:02.543 ], 00:13:02.543 "product_name": "Malloc disk", 00:13:02.543 "block_size": 512, 00:13:02.543 "num_blocks": 16384, 00:13:02.543 "uuid": "0c370f1c-68fe-4498-9e93-37354dde0764", 00:13:02.543 "assigned_rate_limits": { 00:13:02.543 "rw_ios_per_sec": 0, 00:13:02.543 "rw_mbytes_per_sec": 0, 00:13:02.543 "r_mbytes_per_sec": 0, 00:13:02.543 "w_mbytes_per_sec": 0 00:13:02.543 }, 00:13:02.543 "claimed": true, 00:13:02.543 "claim_type": "exclusive_write", 00:13:02.543 "zoned": false, 00:13:02.543 "supported_io_types": { 00:13:02.543 "read": true, 00:13:02.543 "write": true, 00:13:02.543 "unmap": true, 00:13:02.543 "write_zeroes": true, 00:13:02.543 "flush": true, 00:13:02.543 "reset": true, 00:13:02.543 "compare": false, 00:13:02.543 "compare_and_write": false, 00:13:02.543 "abort": true, 00:13:02.543 "nvme_admin": false, 00:13:02.543 "nvme_io": false 00:13:02.543 }, 00:13:02.543 "memory_domains": [ 00:13:02.543 { 00:13:02.543 "dma_device_id": "system", 00:13:02.543 "dma_device_type": 1 00:13:02.543 }, 00:13:02.543 { 00:13:02.543 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:02.543 "dma_device_type": 2 00:13:02.543 } 00:13:02.543 ], 00:13:02.543 "driver_specific": {} 00:13:02.543 }, 00:13:02.543 { 00:13:02.543 "name": "Passthru0", 00:13:02.543 "aliases": [ 00:13:02.543 "1cedaf49-9884-5fd0-b43b-dafd880cb31d" 00:13:02.543 ], 00:13:02.543 "product_name": "passthru", 00:13:02.543 "block_size": 512, 00:13:02.543 "num_blocks": 16384, 00:13:02.543 "uuid": "1cedaf49-9884-5fd0-b43b-dafd880cb31d", 00:13:02.543 "assigned_rate_limits": { 00:13:02.543 "rw_ios_per_sec": 0, 00:13:02.543 "rw_mbytes_per_sec": 0, 00:13:02.543 "r_mbytes_per_sec": 0, 00:13:02.543 "w_mbytes_per_sec": 0 
00:13:02.543 }, 00:13:02.543 "claimed": false, 00:13:02.543 "zoned": false, 00:13:02.543 "supported_io_types": { 00:13:02.543 "read": true, 00:13:02.543 "write": true, 00:13:02.543 "unmap": true, 00:13:02.543 "write_zeroes": true, 00:13:02.543 "flush": true, 00:13:02.543 "reset": true, 00:13:02.543 "compare": false, 00:13:02.543 "compare_and_write": false, 00:13:02.543 "abort": true, 00:13:02.543 "nvme_admin": false, 00:13:02.543 "nvme_io": false 00:13:02.543 }, 00:13:02.543 "memory_domains": [ 00:13:02.543 { 00:13:02.543 "dma_device_id": "system", 00:13:02.543 "dma_device_type": 1 00:13:02.543 }, 00:13:02.543 { 00:13:02.543 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:02.543 "dma_device_type": 2 00:13:02.543 } 00:13:02.543 ], 00:13:02.543 "driver_specific": { 00:13:02.543 "passthru": { 00:13:02.543 "name": "Passthru0", 00:13:02.543 "base_bdev_name": "Malloc0" 00:13:02.543 } 00:13:02.543 } 00:13:02.543 } 00:13:02.543 ]' 00:13:02.543 09:41:25 -- rpc/rpc.sh@21 -- # jq length 00:13:02.543 09:41:25 -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:13:02.543 09:41:25 -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:13:02.543 09:41:25 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:02.543 09:41:25 -- common/autotest_common.sh@10 -- # set +x 00:13:02.543 09:41:25 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:02.543 09:41:25 -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0 00:13:02.543 09:41:25 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:02.543 09:41:25 -- common/autotest_common.sh@10 -- # set +x 00:13:02.543 09:41:25 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:02.543 09:41:25 -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:13:02.543 09:41:25 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:02.543 09:41:25 -- common/autotest_common.sh@10 -- # set +x 00:13:02.543 09:41:25 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:02.543 09:41:25 -- rpc/rpc.sh@25 -- # bdevs='[]' 00:13:02.543 09:41:25 -- rpc/rpc.sh@26 -- # jq length 00:13:02.543 09:41:25 -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:13:02.543 00:13:02.543 real 0m0.367s 00:13:02.543 user 0m0.223s 00:13:02.543 sys 0m0.046s 00:13:02.543 09:41:25 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:13:02.543 09:41:25 -- common/autotest_common.sh@10 -- # set +x 00:13:02.543 ************************************ 00:13:02.543 END TEST rpc_integrity 00:13:02.543 ************************************ 00:13:02.802 09:41:25 -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins 00:13:02.802 09:41:25 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:13:02.802 09:41:25 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:13:02.802 09:41:25 -- common/autotest_common.sh@10 -- # set +x 00:13:02.802 ************************************ 00:13:02.802 START TEST rpc_plugins 00:13:02.802 ************************************ 00:13:02.802 09:41:26 -- common/autotest_common.sh@1111 -- # rpc_plugins 00:13:02.802 09:41:26 -- rpc/rpc.sh@30 -- # rpc_cmd --plugin rpc_plugin create_malloc 00:13:02.802 09:41:26 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:02.802 09:41:26 -- common/autotest_common.sh@10 -- # set +x 00:13:02.802 09:41:26 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:02.802 09:41:26 -- rpc/rpc.sh@30 -- # malloc=Malloc1 00:13:02.802 09:41:26 -- rpc/rpc.sh@31 -- # rpc_cmd bdev_get_bdevs 00:13:02.802 09:41:26 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:02.802 09:41:26 -- common/autotest_common.sh@10 -- # set +x 00:13:02.802 09:41:26 
-- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:02.802 09:41:26 -- rpc/rpc.sh@31 -- # bdevs='[ 00:13:02.802 { 00:13:02.802 "name": "Malloc1", 00:13:02.802 "aliases": [ 00:13:02.802 "f9c4e0ac-751c-4953-ad16-d7b69cad77f8" 00:13:02.802 ], 00:13:02.802 "product_name": "Malloc disk", 00:13:02.802 "block_size": 4096, 00:13:02.802 "num_blocks": 256, 00:13:02.802 "uuid": "f9c4e0ac-751c-4953-ad16-d7b69cad77f8", 00:13:02.802 "assigned_rate_limits": { 00:13:02.802 "rw_ios_per_sec": 0, 00:13:02.802 "rw_mbytes_per_sec": 0, 00:13:02.802 "r_mbytes_per_sec": 0, 00:13:02.802 "w_mbytes_per_sec": 0 00:13:02.802 }, 00:13:02.802 "claimed": false, 00:13:02.802 "zoned": false, 00:13:02.802 "supported_io_types": { 00:13:02.802 "read": true, 00:13:02.802 "write": true, 00:13:02.802 "unmap": true, 00:13:02.802 "write_zeroes": true, 00:13:02.802 "flush": true, 00:13:02.802 "reset": true, 00:13:02.802 "compare": false, 00:13:02.802 "compare_and_write": false, 00:13:02.802 "abort": true, 00:13:02.802 "nvme_admin": false, 00:13:02.802 "nvme_io": false 00:13:02.802 }, 00:13:02.802 "memory_domains": [ 00:13:02.802 { 00:13:02.802 "dma_device_id": "system", 00:13:02.802 "dma_device_type": 1 00:13:02.802 }, 00:13:02.802 { 00:13:02.802 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:02.802 "dma_device_type": 2 00:13:02.802 } 00:13:02.802 ], 00:13:02.802 "driver_specific": {} 00:13:02.802 } 00:13:02.802 ]' 00:13:02.802 09:41:26 -- rpc/rpc.sh@32 -- # jq length 00:13:02.802 09:41:26 -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']' 00:13:02.802 09:41:26 -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1 00:13:02.802 09:41:26 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:02.802 09:41:26 -- common/autotest_common.sh@10 -- # set +x 00:13:02.802 09:41:26 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:02.802 09:41:26 -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs 00:13:02.802 09:41:26 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:02.802 09:41:26 -- common/autotest_common.sh@10 -- # set +x 00:13:02.802 09:41:26 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:02.802 09:41:26 -- rpc/rpc.sh@35 -- # bdevs='[]' 00:13:02.802 09:41:26 -- rpc/rpc.sh@36 -- # jq length 00:13:03.060 09:41:26 -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']' 00:13:03.060 00:13:03.060 real 0m0.171s 00:13:03.060 user 0m0.105s 00:13:03.060 sys 0m0.026s 00:13:03.060 09:41:26 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:13:03.060 ************************************ 00:13:03.060 END TEST rpc_plugins 00:13:03.060 09:41:26 -- common/autotest_common.sh@10 -- # set +x 00:13:03.060 ************************************ 00:13:03.060 09:41:26 -- rpc/rpc.sh@75 -- # run_test rpc_trace_cmd_test rpc_trace_cmd_test 00:13:03.060 09:41:26 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:13:03.060 09:41:26 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:13:03.060 09:41:26 -- common/autotest_common.sh@10 -- # set +x 00:13:03.060 ************************************ 00:13:03.060 START TEST rpc_trace_cmd_test 00:13:03.060 ************************************ 00:13:03.060 09:41:26 -- common/autotest_common.sh@1111 -- # rpc_trace_cmd_test 00:13:03.060 09:41:26 -- rpc/rpc.sh@40 -- # local info 00:13:03.060 09:41:26 -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info 00:13:03.060 09:41:26 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:03.060 09:41:26 -- common/autotest_common.sh@10 -- # set +x 00:13:03.060 09:41:26 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:03.060 09:41:26 -- rpc/rpc.sh@42 -- # 
info='{ 00:13:03.060 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid58493", 00:13:03.060 "tpoint_group_mask": "0x8", 00:13:03.060 "iscsi_conn": { 00:13:03.060 "mask": "0x2", 00:13:03.060 "tpoint_mask": "0x0" 00:13:03.060 }, 00:13:03.060 "scsi": { 00:13:03.060 "mask": "0x4", 00:13:03.060 "tpoint_mask": "0x0" 00:13:03.060 }, 00:13:03.060 "bdev": { 00:13:03.060 "mask": "0x8", 00:13:03.060 "tpoint_mask": "0xffffffffffffffff" 00:13:03.060 }, 00:13:03.060 "nvmf_rdma": { 00:13:03.060 "mask": "0x10", 00:13:03.060 "tpoint_mask": "0x0" 00:13:03.060 }, 00:13:03.060 "nvmf_tcp": { 00:13:03.060 "mask": "0x20", 00:13:03.060 "tpoint_mask": "0x0" 00:13:03.060 }, 00:13:03.060 "ftl": { 00:13:03.060 "mask": "0x40", 00:13:03.060 "tpoint_mask": "0x0" 00:13:03.060 }, 00:13:03.060 "blobfs": { 00:13:03.060 "mask": "0x80", 00:13:03.060 "tpoint_mask": "0x0" 00:13:03.060 }, 00:13:03.060 "dsa": { 00:13:03.060 "mask": "0x200", 00:13:03.060 "tpoint_mask": "0x0" 00:13:03.060 }, 00:13:03.060 "thread": { 00:13:03.060 "mask": "0x400", 00:13:03.060 "tpoint_mask": "0x0" 00:13:03.060 }, 00:13:03.060 "nvme_pcie": { 00:13:03.060 "mask": "0x800", 00:13:03.060 "tpoint_mask": "0x0" 00:13:03.060 }, 00:13:03.060 "iaa": { 00:13:03.060 "mask": "0x1000", 00:13:03.060 "tpoint_mask": "0x0" 00:13:03.060 }, 00:13:03.060 "nvme_tcp": { 00:13:03.060 "mask": "0x2000", 00:13:03.060 "tpoint_mask": "0x0" 00:13:03.060 }, 00:13:03.060 "bdev_nvme": { 00:13:03.060 "mask": "0x4000", 00:13:03.060 "tpoint_mask": "0x0" 00:13:03.060 }, 00:13:03.060 "sock": { 00:13:03.060 "mask": "0x8000", 00:13:03.060 "tpoint_mask": "0x0" 00:13:03.060 } 00:13:03.060 }' 00:13:03.060 09:41:26 -- rpc/rpc.sh@43 -- # jq length 00:13:03.060 09:41:26 -- rpc/rpc.sh@43 -- # '[' 16 -gt 2 ']' 00:13:03.060 09:41:26 -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")' 00:13:03.060 09:41:26 -- rpc/rpc.sh@44 -- # '[' true = true ']' 00:13:03.318 09:41:26 -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")' 00:13:03.318 09:41:26 -- rpc/rpc.sh@45 -- # '[' true = true ']' 00:13:03.318 09:41:26 -- rpc/rpc.sh@46 -- # jq 'has("bdev")' 00:13:03.318 09:41:26 -- rpc/rpc.sh@46 -- # '[' true = true ']' 00:13:03.318 09:41:26 -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask 00:13:03.318 09:41:26 -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 0x0 ']' 00:13:03.318 00:13:03.318 real 0m0.273s 00:13:03.318 user 0m0.236s 00:13:03.318 sys 0m0.029s 00:13:03.318 09:41:26 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:13:03.318 09:41:26 -- common/autotest_common.sh@10 -- # set +x 00:13:03.318 ************************************ 00:13:03.318 END TEST rpc_trace_cmd_test 00:13:03.318 ************************************ 00:13:03.318 09:41:26 -- rpc/rpc.sh@76 -- # [[ 0 -eq 1 ]] 00:13:03.318 09:41:26 -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 00:13:03.318 09:41:26 -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity 00:13:03.318 09:41:26 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:13:03.318 09:41:26 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:13:03.318 09:41:26 -- common/autotest_common.sh@10 -- # set +x 00:13:03.576 ************************************ 00:13:03.576 START TEST rpc_daemon_integrity 00:13:03.576 ************************************ 00:13:03.576 09:41:26 -- common/autotest_common.sh@1111 -- # rpc_integrity 00:13:03.576 09:41:26 -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:13:03.576 09:41:26 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:03.576 09:41:26 -- common/autotest_common.sh@10 -- # set +x 00:13:03.576 09:41:26 -- common/autotest_common.sh@577 
-- # [[ 0 == 0 ]] 00:13:03.576 09:41:26 -- rpc/rpc.sh@12 -- # bdevs='[]' 00:13:03.576 09:41:26 -- rpc/rpc.sh@13 -- # jq length 00:13:03.576 09:41:26 -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:13:03.576 09:41:26 -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:13:03.576 09:41:26 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:03.576 09:41:26 -- common/autotest_common.sh@10 -- # set +x 00:13:03.576 09:41:26 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:03.576 09:41:26 -- rpc/rpc.sh@15 -- # malloc=Malloc2 00:13:03.576 09:41:26 -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:13:03.576 09:41:26 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:03.576 09:41:26 -- common/autotest_common.sh@10 -- # set +x 00:13:03.576 09:41:26 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:03.576 09:41:26 -- rpc/rpc.sh@16 -- # bdevs='[ 00:13:03.576 { 00:13:03.576 "name": "Malloc2", 00:13:03.576 "aliases": [ 00:13:03.576 "184e23fa-0521-459d-8ee5-85c2b3ec3ff8" 00:13:03.576 ], 00:13:03.576 "product_name": "Malloc disk", 00:13:03.576 "block_size": 512, 00:13:03.576 "num_blocks": 16384, 00:13:03.576 "uuid": "184e23fa-0521-459d-8ee5-85c2b3ec3ff8", 00:13:03.576 "assigned_rate_limits": { 00:13:03.576 "rw_ios_per_sec": 0, 00:13:03.576 "rw_mbytes_per_sec": 0, 00:13:03.576 "r_mbytes_per_sec": 0, 00:13:03.576 "w_mbytes_per_sec": 0 00:13:03.576 }, 00:13:03.577 "claimed": false, 00:13:03.577 "zoned": false, 00:13:03.577 "supported_io_types": { 00:13:03.577 "read": true, 00:13:03.577 "write": true, 00:13:03.577 "unmap": true, 00:13:03.577 "write_zeroes": true, 00:13:03.577 "flush": true, 00:13:03.577 "reset": true, 00:13:03.577 "compare": false, 00:13:03.577 "compare_and_write": false, 00:13:03.577 "abort": true, 00:13:03.577 "nvme_admin": false, 00:13:03.577 "nvme_io": false 00:13:03.577 }, 00:13:03.577 "memory_domains": [ 00:13:03.577 { 00:13:03.577 "dma_device_id": "system", 00:13:03.577 "dma_device_type": 1 00:13:03.577 }, 00:13:03.577 { 00:13:03.577 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:03.577 "dma_device_type": 2 00:13:03.577 } 00:13:03.577 ], 00:13:03.577 "driver_specific": {} 00:13:03.577 } 00:13:03.577 ]' 00:13:03.577 09:41:26 -- rpc/rpc.sh@17 -- # jq length 00:13:03.577 09:41:26 -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:13:03.577 09:41:26 -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc2 -p Passthru0 00:13:03.577 09:41:26 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:03.577 09:41:26 -- common/autotest_common.sh@10 -- # set +x 00:13:03.577 [2024-04-18 09:41:26.893184] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc2 00:13:03.577 [2024-04-18 09:41:26.893259] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:03.577 [2024-04-18 09:41:26.893289] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008780 00:13:03.577 [2024-04-18 09:41:26.893306] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:03.577 [2024-04-18 09:41:26.896304] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:03.577 [2024-04-18 09:41:26.896355] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:13:03.577 Passthru0 00:13:03.577 09:41:26 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:03.577 09:41:26 -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:13:03.577 09:41:26 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:03.577 09:41:26 -- common/autotest_common.sh@10 -- # set +x 
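The rpc_daemon_integrity test above layers a passthru bdev on top of a freshly created malloc bdev, entirely over the JSON-RPC socket. A minimal sketch of the same round trip driven with scripts/rpc.py (sizes, names and subcommands taken from the log; talking to the default /var/tmp/spdk.sock socket is an assumption, since the test goes through its rpc_cmd wrapper):

  # create an 8 MiB malloc bdev with 512-byte blocks, wrap it, inspect, tear down
  ./scripts/rpc.py bdev_malloc_create 8 512                      # returns a name such as Malloc2
  ./scripts/rpc.py bdev_passthru_create -b Malloc2 -p Passthru0  # claims Malloc2
  ./scripts/rpc.py bdev_get_bdevs | jq length                    # 2: Malloc2 + Passthru0
  ./scripts/rpc.py bdev_passthru_delete Passthru0
  ./scripts/rpc.py bdev_malloc_delete Malloc2
  ./scripts/rpc.py bdev_get_bdevs | jq length                    # back to 0

After bdev_passthru_create, bdev_get_bdevs reports Malloc2 as claimed with claim_type exclusive_write, which is what the dump that follows shows.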
00:13:03.577 09:41:26 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:03.577 09:41:26 -- rpc/rpc.sh@20 -- # bdevs='[ 00:13:03.577 { 00:13:03.577 "name": "Malloc2", 00:13:03.577 "aliases": [ 00:13:03.577 "184e23fa-0521-459d-8ee5-85c2b3ec3ff8" 00:13:03.577 ], 00:13:03.577 "product_name": "Malloc disk", 00:13:03.577 "block_size": 512, 00:13:03.577 "num_blocks": 16384, 00:13:03.577 "uuid": "184e23fa-0521-459d-8ee5-85c2b3ec3ff8", 00:13:03.577 "assigned_rate_limits": { 00:13:03.577 "rw_ios_per_sec": 0, 00:13:03.577 "rw_mbytes_per_sec": 0, 00:13:03.577 "r_mbytes_per_sec": 0, 00:13:03.577 "w_mbytes_per_sec": 0 00:13:03.577 }, 00:13:03.577 "claimed": true, 00:13:03.577 "claim_type": "exclusive_write", 00:13:03.577 "zoned": false, 00:13:03.577 "supported_io_types": { 00:13:03.577 "read": true, 00:13:03.577 "write": true, 00:13:03.577 "unmap": true, 00:13:03.577 "write_zeroes": true, 00:13:03.577 "flush": true, 00:13:03.577 "reset": true, 00:13:03.577 "compare": false, 00:13:03.577 "compare_and_write": false, 00:13:03.577 "abort": true, 00:13:03.577 "nvme_admin": false, 00:13:03.577 "nvme_io": false 00:13:03.577 }, 00:13:03.577 "memory_domains": [ 00:13:03.577 { 00:13:03.577 "dma_device_id": "system", 00:13:03.577 "dma_device_type": 1 00:13:03.577 }, 00:13:03.577 { 00:13:03.577 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:03.577 "dma_device_type": 2 00:13:03.577 } 00:13:03.577 ], 00:13:03.577 "driver_specific": {} 00:13:03.577 }, 00:13:03.577 { 00:13:03.577 "name": "Passthru0", 00:13:03.577 "aliases": [ 00:13:03.577 "a94be2f7-22f0-5401-b19c-1254e9c75d9f" 00:13:03.577 ], 00:13:03.577 "product_name": "passthru", 00:13:03.577 "block_size": 512, 00:13:03.577 "num_blocks": 16384, 00:13:03.577 "uuid": "a94be2f7-22f0-5401-b19c-1254e9c75d9f", 00:13:03.577 "assigned_rate_limits": { 00:13:03.577 "rw_ios_per_sec": 0, 00:13:03.577 "rw_mbytes_per_sec": 0, 00:13:03.577 "r_mbytes_per_sec": 0, 00:13:03.577 "w_mbytes_per_sec": 0 00:13:03.577 }, 00:13:03.577 "claimed": false, 00:13:03.577 "zoned": false, 00:13:03.577 "supported_io_types": { 00:13:03.577 "read": true, 00:13:03.577 "write": true, 00:13:03.577 "unmap": true, 00:13:03.577 "write_zeroes": true, 00:13:03.577 "flush": true, 00:13:03.577 "reset": true, 00:13:03.577 "compare": false, 00:13:03.577 "compare_and_write": false, 00:13:03.577 "abort": true, 00:13:03.577 "nvme_admin": false, 00:13:03.577 "nvme_io": false 00:13:03.577 }, 00:13:03.577 "memory_domains": [ 00:13:03.577 { 00:13:03.577 "dma_device_id": "system", 00:13:03.577 "dma_device_type": 1 00:13:03.577 }, 00:13:03.577 { 00:13:03.577 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:03.577 "dma_device_type": 2 00:13:03.577 } 00:13:03.577 ], 00:13:03.577 "driver_specific": { 00:13:03.577 "passthru": { 00:13:03.577 "name": "Passthru0", 00:13:03.577 "base_bdev_name": "Malloc2" 00:13:03.577 } 00:13:03.577 } 00:13:03.577 } 00:13:03.577 ]' 00:13:03.577 09:41:26 -- rpc/rpc.sh@21 -- # jq length 00:13:03.835 09:41:26 -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:13:03.835 09:41:26 -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:13:03.835 09:41:26 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:03.835 09:41:26 -- common/autotest_common.sh@10 -- # set +x 00:13:03.835 09:41:26 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:03.835 09:41:26 -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc2 00:13:03.835 09:41:26 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:03.835 09:41:26 -- common/autotest_common.sh@10 -- # set +x 00:13:03.835 09:41:27 -- 
common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:03.835 09:41:27 -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:13:03.835 09:41:27 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:03.835 09:41:27 -- common/autotest_common.sh@10 -- # set +x 00:13:03.835 09:41:27 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:03.835 09:41:27 -- rpc/rpc.sh@25 -- # bdevs='[]' 00:13:03.835 09:41:27 -- rpc/rpc.sh@26 -- # jq length 00:13:03.835 ************************************ 00:13:03.835 END TEST rpc_daemon_integrity 00:13:03.835 ************************************ 00:13:03.835 09:41:27 -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:13:03.835 00:13:03.835 real 0m0.349s 00:13:03.835 user 0m0.207s 00:13:03.835 sys 0m0.046s 00:13:03.835 09:41:27 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:13:03.835 09:41:27 -- common/autotest_common.sh@10 -- # set +x 00:13:03.835 09:41:27 -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:13:03.835 09:41:27 -- rpc/rpc.sh@84 -- # killprocess 58493 00:13:03.835 09:41:27 -- common/autotest_common.sh@936 -- # '[' -z 58493 ']' 00:13:03.835 09:41:27 -- common/autotest_common.sh@940 -- # kill -0 58493 00:13:03.835 09:41:27 -- common/autotest_common.sh@941 -- # uname 00:13:03.835 09:41:27 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:13:03.835 09:41:27 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 58493 00:13:03.835 killing process with pid 58493 00:13:03.835 09:41:27 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:13:03.835 09:41:27 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:13:03.835 09:41:27 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 58493' 00:13:03.835 09:41:27 -- common/autotest_common.sh@955 -- # kill 58493 00:13:03.835 09:41:27 -- common/autotest_common.sh@960 -- # wait 58493 00:13:06.367 ************************************ 00:13:06.367 END TEST rpc 00:13:06.367 ************************************ 00:13:06.367 00:13:06.367 real 0m5.463s 00:13:06.367 user 0m6.203s 00:13:06.367 sys 0m1.051s 00:13:06.367 09:41:29 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:13:06.367 09:41:29 -- common/autotest_common.sh@10 -- # set +x 00:13:06.367 09:41:29 -- spdk/autotest.sh@166 -- # run_test skip_rpc /home/vagrant/spdk_repo/spdk/test/rpc/skip_rpc.sh 00:13:06.367 09:41:29 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:13:06.367 09:41:29 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:13:06.367 09:41:29 -- common/autotest_common.sh@10 -- # set +x 00:13:06.367 ************************************ 00:13:06.367 START TEST skip_rpc 00:13:06.367 ************************************ 00:13:06.367 09:41:29 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/rpc/skip_rpc.sh 00:13:06.367 * Looking for test storage... 
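The skip_rpc suite that starts here first checks that a target launched with --no-rpc-server really has no RPC endpoint. Reduced to its shell shape (binary path and flags as they appear in the log; the explicit failure check is a sketch of what the NOT/rpc_cmd helpers do):

  /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 &
  spdk_pid=$!
  sleep 5                                     # the test sleeps instead of waiting for a socket
  if ./scripts/rpc.py spdk_get_version; then
      echo 'unexpected: RPC server answered' >&2
      exit 1
  fi
  kill "$spdk_pid"
  wait "$spdk_pid"

The failed spdk_get_version call is the expected outcome; the suite then kills pid 58741, as the entries a little further on show.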
00:13:06.367 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc 00:13:06.367 09:41:29 -- rpc/skip_rpc.sh@11 -- # CONFIG_PATH=/home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:13:06.367 09:41:29 -- rpc/skip_rpc.sh@12 -- # LOG_PATH=/home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:13:06.367 09:41:29 -- rpc/skip_rpc.sh@73 -- # run_test skip_rpc test_skip_rpc 00:13:06.367 09:41:29 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:13:06.367 09:41:29 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:13:06.367 09:41:29 -- common/autotest_common.sh@10 -- # set +x 00:13:06.367 ************************************ 00:13:06.367 START TEST skip_rpc 00:13:06.367 ************************************ 00:13:06.367 09:41:29 -- common/autotest_common.sh@1111 -- # test_skip_rpc 00:13:06.367 09:41:29 -- rpc/skip_rpc.sh@16 -- # local spdk_pid=58741 00:13:06.367 09:41:29 -- rpc/skip_rpc.sh@15 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 00:13:06.367 09:41:29 -- rpc/skip_rpc.sh@18 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:13:06.367 09:41:29 -- rpc/skip_rpc.sh@19 -- # sleep 5 00:13:06.626 [2024-04-18 09:41:29.877057] Starting SPDK v24.05-pre git sha1 65b4e17c6 / DPDK 23.11.0 initialization... 00:13:06.626 [2024-04-18 09:41:29.877832] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58741 ] 00:13:06.883 [2024-04-18 09:41:30.054622] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:07.140 [2024-04-18 09:41:30.301755] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:13:11.469 09:41:34 -- rpc/skip_rpc.sh@21 -- # NOT rpc_cmd spdk_get_version 00:13:11.469 09:41:34 -- common/autotest_common.sh@638 -- # local es=0 00:13:11.469 09:41:34 -- common/autotest_common.sh@640 -- # valid_exec_arg rpc_cmd spdk_get_version 00:13:11.469 09:41:34 -- common/autotest_common.sh@626 -- # local arg=rpc_cmd 00:13:11.469 09:41:34 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:13:11.469 09:41:34 -- common/autotest_common.sh@630 -- # type -t rpc_cmd 00:13:11.469 09:41:34 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:13:11.469 09:41:34 -- common/autotest_common.sh@641 -- # rpc_cmd spdk_get_version 00:13:11.469 09:41:34 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:11.469 09:41:34 -- common/autotest_common.sh@10 -- # set +x 00:13:11.469 09:41:34 -- common/autotest_common.sh@577 -- # [[ 1 == 0 ]] 00:13:11.469 09:41:34 -- common/autotest_common.sh@641 -- # es=1 00:13:11.469 09:41:34 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:13:11.469 09:41:34 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:13:11.469 09:41:34 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:13:11.469 09:41:34 -- rpc/skip_rpc.sh@22 -- # trap - SIGINT SIGTERM EXIT 00:13:11.469 09:41:34 -- rpc/skip_rpc.sh@23 -- # killprocess 58741 00:13:11.469 09:41:34 -- common/autotest_common.sh@936 -- # '[' -z 58741 ']' 00:13:11.469 09:41:34 -- common/autotest_common.sh@940 -- # kill -0 58741 00:13:11.469 09:41:34 -- common/autotest_common.sh@941 -- # uname 00:13:11.469 09:41:34 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:13:11.469 09:41:34 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 58741 00:13:11.469 killing process with pid 58741 00:13:11.469 09:41:34 -- 
common/autotest_common.sh@942 -- # process_name=reactor_0 00:13:11.469 09:41:34 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:13:11.469 09:41:34 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 58741' 00:13:11.469 09:41:34 -- common/autotest_common.sh@955 -- # kill 58741 00:13:11.469 09:41:34 -- common/autotest_common.sh@960 -- # wait 58741 00:13:14.000 ************************************ 00:13:14.000 END TEST skip_rpc 00:13:14.000 ************************************ 00:13:14.000 00:13:14.000 real 0m7.238s 00:13:14.000 user 0m6.662s 00:13:14.000 sys 0m0.461s 00:13:14.000 09:41:36 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:13:14.000 09:41:36 -- common/autotest_common.sh@10 -- # set +x 00:13:14.000 09:41:37 -- rpc/skip_rpc.sh@74 -- # run_test skip_rpc_with_json test_skip_rpc_with_json 00:13:14.000 09:41:37 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:13:14.000 09:41:37 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:13:14.000 09:41:37 -- common/autotest_common.sh@10 -- # set +x 00:13:14.000 ************************************ 00:13:14.000 START TEST skip_rpc_with_json 00:13:14.000 ************************************ 00:13:14.000 09:41:37 -- common/autotest_common.sh@1111 -- # test_skip_rpc_with_json 00:13:14.000 09:41:37 -- rpc/skip_rpc.sh@44 -- # gen_json_config 00:13:14.000 09:41:37 -- rpc/skip_rpc.sh@28 -- # local spdk_pid=58850 00:13:14.000 09:41:37 -- rpc/skip_rpc.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:13:14.000 09:41:37 -- rpc/skip_rpc.sh@30 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:13:14.000 09:41:37 -- rpc/skip_rpc.sh@31 -- # waitforlisten 58850 00:13:14.000 09:41:37 -- common/autotest_common.sh@817 -- # '[' -z 58850 ']' 00:13:14.000 09:41:37 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:14.000 09:41:37 -- common/autotest_common.sh@822 -- # local max_retries=100 00:13:14.000 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:14.000 09:41:37 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:14.000 09:41:37 -- common/autotest_common.sh@826 -- # xtrace_disable 00:13:14.000 09:41:37 -- common/autotest_common.sh@10 -- # set +x 00:13:14.000 [2024-04-18 09:41:37.226267] Starting SPDK v24.05-pre git sha1 65b4e17c6 / DPDK 23.11.0 initialization... 
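skip_rpc_with_json, starting up here, exercises the save/restore path: configure an NVMe-oF TCP transport over RPC, dump the running configuration to JSON, then prove a fresh target comes up from that file alone. As plain rpc.py calls (the redirect into config.json is an assumption about how the helper captures save_config output; everything else is taken from the log):

  ./scripts/rpc.py nvmf_get_transports --trtype tcp   # fails first: "No such device"
  ./scripts/rpc.py nvmf_create_transport -t tcp
  ./scripts/rpc.py save_config > /home/vagrant/spdk_repo/spdk/test/rpc/config.json
  # relaunch: spdk_tgt --no-rpc-server -m 0x1 --json .../test/rpc/config.json
  # success criterion: the new target's log contains "TCP Transport Init"

The saved JSON reproduced below is what save_config emits for this run.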
00:13:14.000 [2024-04-18 09:41:37.226465] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58850 ] 00:13:14.258 [2024-04-18 09:41:37.396766] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:14.516 [2024-04-18 09:41:37.677317] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:13:15.453 09:41:38 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:13:15.453 09:41:38 -- common/autotest_common.sh@850 -- # return 0 00:13:15.453 09:41:38 -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_get_transports --trtype tcp 00:13:15.453 09:41:38 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:15.453 09:41:38 -- common/autotest_common.sh@10 -- # set +x 00:13:15.453 [2024-04-18 09:41:38.484917] nvmf_rpc.c:2509:rpc_nvmf_get_transports: *ERROR*: transport 'tcp' does not exist 00:13:15.453 request: 00:13:15.453 { 00:13:15.453 "trtype": "tcp", 00:13:15.453 "method": "nvmf_get_transports", 00:13:15.453 "req_id": 1 00:13:15.453 } 00:13:15.453 Got JSON-RPC error response 00:13:15.453 response: 00:13:15.453 { 00:13:15.453 "code": -19, 00:13:15.453 "message": "No such device" 00:13:15.453 } 00:13:15.453 09:41:38 -- common/autotest_common.sh@577 -- # [[ 1 == 0 ]] 00:13:15.453 09:41:38 -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_create_transport -t tcp 00:13:15.453 09:41:38 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:15.453 09:41:38 -- common/autotest_common.sh@10 -- # set +x 00:13:15.453 [2024-04-18 09:41:38.496998] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:15.453 09:41:38 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:15.453 09:41:38 -- rpc/skip_rpc.sh@36 -- # rpc_cmd save_config 00:13:15.453 09:41:38 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:15.453 09:41:38 -- common/autotest_common.sh@10 -- # set +x 00:13:15.453 09:41:38 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:15.453 09:41:38 -- rpc/skip_rpc.sh@37 -- # cat /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:13:15.453 { 00:13:15.453 "subsystems": [ 00:13:15.453 { 00:13:15.453 "subsystem": "keyring", 00:13:15.453 "config": [] 00:13:15.453 }, 00:13:15.453 { 00:13:15.453 "subsystem": "iobuf", 00:13:15.453 "config": [ 00:13:15.453 { 00:13:15.453 "method": "iobuf_set_options", 00:13:15.453 "params": { 00:13:15.453 "small_pool_count": 8192, 00:13:15.453 "large_pool_count": 1024, 00:13:15.453 "small_bufsize": 8192, 00:13:15.453 "large_bufsize": 135168 00:13:15.453 } 00:13:15.453 } 00:13:15.453 ] 00:13:15.453 }, 00:13:15.453 { 00:13:15.453 "subsystem": "sock", 00:13:15.453 "config": [ 00:13:15.453 { 00:13:15.453 "method": "sock_impl_set_options", 00:13:15.453 "params": { 00:13:15.453 "impl_name": "uring", 00:13:15.453 "recv_buf_size": 2097152, 00:13:15.453 "send_buf_size": 2097152, 00:13:15.453 "enable_recv_pipe": true, 00:13:15.453 "enable_quickack": false, 00:13:15.453 "enable_placement_id": 0, 00:13:15.453 "enable_zerocopy_send_server": false, 00:13:15.453 "enable_zerocopy_send_client": false, 00:13:15.453 "zerocopy_threshold": 0, 00:13:15.453 "tls_version": 0, 00:13:15.453 "enable_ktls": false 00:13:15.453 } 00:13:15.453 }, 00:13:15.453 { 00:13:15.453 "method": "sock_impl_set_options", 00:13:15.453 "params": { 00:13:15.453 "impl_name": "posix", 00:13:15.453 "recv_buf_size": 2097152, 00:13:15.453 "send_buf_size": 2097152, 00:13:15.453 "enable_recv_pipe": true, 
00:13:15.453 "enable_quickack": false, 00:13:15.453 "enable_placement_id": 0, 00:13:15.453 "enable_zerocopy_send_server": true, 00:13:15.453 "enable_zerocopy_send_client": false, 00:13:15.453 "zerocopy_threshold": 0, 00:13:15.453 "tls_version": 0, 00:13:15.453 "enable_ktls": false 00:13:15.453 } 00:13:15.453 }, 00:13:15.453 { 00:13:15.453 "method": "sock_impl_set_options", 00:13:15.453 "params": { 00:13:15.453 "impl_name": "ssl", 00:13:15.453 "recv_buf_size": 4096, 00:13:15.453 "send_buf_size": 4096, 00:13:15.453 "enable_recv_pipe": true, 00:13:15.453 "enable_quickack": false, 00:13:15.453 "enable_placement_id": 0, 00:13:15.453 "enable_zerocopy_send_server": true, 00:13:15.453 "enable_zerocopy_send_client": false, 00:13:15.453 "zerocopy_threshold": 0, 00:13:15.453 "tls_version": 0, 00:13:15.453 "enable_ktls": false 00:13:15.453 } 00:13:15.453 } 00:13:15.453 ] 00:13:15.453 }, 00:13:15.453 { 00:13:15.453 "subsystem": "vmd", 00:13:15.453 "config": [] 00:13:15.453 }, 00:13:15.453 { 00:13:15.453 "subsystem": "accel", 00:13:15.453 "config": [ 00:13:15.453 { 00:13:15.453 "method": "accel_set_options", 00:13:15.453 "params": { 00:13:15.453 "small_cache_size": 128, 00:13:15.453 "large_cache_size": 16, 00:13:15.453 "task_count": 2048, 00:13:15.453 "sequence_count": 2048, 00:13:15.453 "buf_count": 2048 00:13:15.453 } 00:13:15.453 } 00:13:15.453 ] 00:13:15.453 }, 00:13:15.453 { 00:13:15.453 "subsystem": "bdev", 00:13:15.453 "config": [ 00:13:15.453 { 00:13:15.453 "method": "bdev_set_options", 00:13:15.453 "params": { 00:13:15.453 "bdev_io_pool_size": 65535, 00:13:15.453 "bdev_io_cache_size": 256, 00:13:15.453 "bdev_auto_examine": true, 00:13:15.453 "iobuf_small_cache_size": 128, 00:13:15.453 "iobuf_large_cache_size": 16 00:13:15.453 } 00:13:15.453 }, 00:13:15.453 { 00:13:15.453 "method": "bdev_raid_set_options", 00:13:15.453 "params": { 00:13:15.453 "process_window_size_kb": 1024 00:13:15.453 } 00:13:15.453 }, 00:13:15.453 { 00:13:15.453 "method": "bdev_iscsi_set_options", 00:13:15.453 "params": { 00:13:15.453 "timeout_sec": 30 00:13:15.453 } 00:13:15.453 }, 00:13:15.453 { 00:13:15.453 "method": "bdev_nvme_set_options", 00:13:15.453 "params": { 00:13:15.453 "action_on_timeout": "none", 00:13:15.453 "timeout_us": 0, 00:13:15.453 "timeout_admin_us": 0, 00:13:15.453 "keep_alive_timeout_ms": 10000, 00:13:15.453 "arbitration_burst": 0, 00:13:15.453 "low_priority_weight": 0, 00:13:15.453 "medium_priority_weight": 0, 00:13:15.453 "high_priority_weight": 0, 00:13:15.453 "nvme_adminq_poll_period_us": 10000, 00:13:15.453 "nvme_ioq_poll_period_us": 0, 00:13:15.453 "io_queue_requests": 0, 00:13:15.453 "delay_cmd_submit": true, 00:13:15.453 "transport_retry_count": 4, 00:13:15.453 "bdev_retry_count": 3, 00:13:15.453 "transport_ack_timeout": 0, 00:13:15.453 "ctrlr_loss_timeout_sec": 0, 00:13:15.453 "reconnect_delay_sec": 0, 00:13:15.453 "fast_io_fail_timeout_sec": 0, 00:13:15.453 "disable_auto_failback": false, 00:13:15.453 "generate_uuids": false, 00:13:15.453 "transport_tos": 0, 00:13:15.453 "nvme_error_stat": false, 00:13:15.453 "rdma_srq_size": 0, 00:13:15.453 "io_path_stat": false, 00:13:15.453 "allow_accel_sequence": false, 00:13:15.453 "rdma_max_cq_size": 0, 00:13:15.453 "rdma_cm_event_timeout_ms": 0, 00:13:15.453 "dhchap_digests": [ 00:13:15.453 "sha256", 00:13:15.454 "sha384", 00:13:15.454 "sha512" 00:13:15.454 ], 00:13:15.454 "dhchap_dhgroups": [ 00:13:15.454 "null", 00:13:15.454 "ffdhe2048", 00:13:15.454 "ffdhe3072", 00:13:15.454 "ffdhe4096", 00:13:15.454 "ffdhe6144", 00:13:15.454 "ffdhe8192" 
00:13:15.454 ] 00:13:15.454 } 00:13:15.454 }, 00:13:15.454 { 00:13:15.454 "method": "bdev_nvme_set_hotplug", 00:13:15.454 "params": { 00:13:15.454 "period_us": 100000, 00:13:15.454 "enable": false 00:13:15.454 } 00:13:15.454 }, 00:13:15.454 { 00:13:15.454 "method": "bdev_wait_for_examine" 00:13:15.454 } 00:13:15.454 ] 00:13:15.454 }, 00:13:15.454 { 00:13:15.454 "subsystem": "scsi", 00:13:15.454 "config": null 00:13:15.454 }, 00:13:15.454 { 00:13:15.454 "subsystem": "scheduler", 00:13:15.454 "config": [ 00:13:15.454 { 00:13:15.454 "method": "framework_set_scheduler", 00:13:15.454 "params": { 00:13:15.454 "name": "static" 00:13:15.454 } 00:13:15.454 } 00:13:15.454 ] 00:13:15.454 }, 00:13:15.454 { 00:13:15.454 "subsystem": "vhost_scsi", 00:13:15.454 "config": [] 00:13:15.454 }, 00:13:15.454 { 00:13:15.454 "subsystem": "vhost_blk", 00:13:15.454 "config": [] 00:13:15.454 }, 00:13:15.454 { 00:13:15.454 "subsystem": "ublk", 00:13:15.454 "config": [] 00:13:15.454 }, 00:13:15.454 { 00:13:15.454 "subsystem": "nbd", 00:13:15.454 "config": [] 00:13:15.454 }, 00:13:15.454 { 00:13:15.454 "subsystem": "nvmf", 00:13:15.454 "config": [ 00:13:15.454 { 00:13:15.454 "method": "nvmf_set_config", 00:13:15.454 "params": { 00:13:15.454 "discovery_filter": "match_any", 00:13:15.454 "admin_cmd_passthru": { 00:13:15.454 "identify_ctrlr": false 00:13:15.454 } 00:13:15.454 } 00:13:15.454 }, 00:13:15.454 { 00:13:15.454 "method": "nvmf_set_max_subsystems", 00:13:15.454 "params": { 00:13:15.454 "max_subsystems": 1024 00:13:15.454 } 00:13:15.454 }, 00:13:15.454 { 00:13:15.454 "method": "nvmf_set_crdt", 00:13:15.454 "params": { 00:13:15.454 "crdt1": 0, 00:13:15.454 "crdt2": 0, 00:13:15.454 "crdt3": 0 00:13:15.454 } 00:13:15.454 }, 00:13:15.454 { 00:13:15.454 "method": "nvmf_create_transport", 00:13:15.454 "params": { 00:13:15.454 "trtype": "TCP", 00:13:15.454 "max_queue_depth": 128, 00:13:15.454 "max_io_qpairs_per_ctrlr": 127, 00:13:15.454 "in_capsule_data_size": 4096, 00:13:15.454 "max_io_size": 131072, 00:13:15.454 "io_unit_size": 131072, 00:13:15.454 "max_aq_depth": 128, 00:13:15.454 "num_shared_buffers": 511, 00:13:15.454 "buf_cache_size": 4294967295, 00:13:15.454 "dif_insert_or_strip": false, 00:13:15.454 "zcopy": false, 00:13:15.454 "c2h_success": true, 00:13:15.454 "sock_priority": 0, 00:13:15.454 "abort_timeout_sec": 1, 00:13:15.454 "ack_timeout": 0 00:13:15.454 } 00:13:15.454 } 00:13:15.454 ] 00:13:15.454 }, 00:13:15.454 { 00:13:15.454 "subsystem": "iscsi", 00:13:15.454 "config": [ 00:13:15.454 { 00:13:15.454 "method": "iscsi_set_options", 00:13:15.454 "params": { 00:13:15.454 "node_base": "iqn.2016-06.io.spdk", 00:13:15.454 "max_sessions": 128, 00:13:15.454 "max_connections_per_session": 2, 00:13:15.454 "max_queue_depth": 64, 00:13:15.454 "default_time2wait": 2, 00:13:15.454 "default_time2retain": 20, 00:13:15.454 "first_burst_length": 8192, 00:13:15.454 "immediate_data": true, 00:13:15.454 "allow_duplicated_isid": false, 00:13:15.454 "error_recovery_level": 0, 00:13:15.454 "nop_timeout": 60, 00:13:15.454 "nop_in_interval": 30, 00:13:15.454 "disable_chap": false, 00:13:15.454 "require_chap": false, 00:13:15.454 "mutual_chap": false, 00:13:15.454 "chap_group": 0, 00:13:15.454 "max_large_datain_per_connection": 64, 00:13:15.454 "max_r2t_per_connection": 4, 00:13:15.454 "pdu_pool_size": 36864, 00:13:15.454 "immediate_data_pool_size": 16384, 00:13:15.454 "data_out_pool_size": 2048 00:13:15.454 } 00:13:15.454 } 00:13:15.454 ] 00:13:15.454 } 00:13:15.454 ] 00:13:15.454 } 00:13:15.454 09:41:38 -- rpc/skip_rpc.sh@39 -- # 
trap - SIGINT SIGTERM EXIT 00:13:15.454 09:41:38 -- rpc/skip_rpc.sh@40 -- # killprocess 58850 00:13:15.454 09:41:38 -- common/autotest_common.sh@936 -- # '[' -z 58850 ']' 00:13:15.454 09:41:38 -- common/autotest_common.sh@940 -- # kill -0 58850 00:13:15.454 09:41:38 -- common/autotest_common.sh@941 -- # uname 00:13:15.454 09:41:38 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:13:15.454 09:41:38 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 58850 00:13:15.454 killing process with pid 58850 00:13:15.454 09:41:38 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:13:15.454 09:41:38 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:13:15.454 09:41:38 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 58850' 00:13:15.454 09:41:38 -- common/autotest_common.sh@955 -- # kill 58850 00:13:15.454 09:41:38 -- common/autotest_common.sh@960 -- # wait 58850 00:13:17.985 09:41:40 -- rpc/skip_rpc.sh@47 -- # local spdk_pid=58906 00:13:17.985 09:41:40 -- rpc/skip_rpc.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --json /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:13:17.985 09:41:40 -- rpc/skip_rpc.sh@48 -- # sleep 5 00:13:23.252 09:41:45 -- rpc/skip_rpc.sh@50 -- # killprocess 58906 00:13:23.252 09:41:45 -- common/autotest_common.sh@936 -- # '[' -z 58906 ']' 00:13:23.252 09:41:45 -- common/autotest_common.sh@940 -- # kill -0 58906 00:13:23.252 09:41:45 -- common/autotest_common.sh@941 -- # uname 00:13:23.252 09:41:45 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:13:23.252 09:41:45 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 58906 00:13:23.252 killing process with pid 58906 00:13:23.252 09:41:45 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:13:23.252 09:41:45 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:13:23.252 09:41:45 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 58906' 00:13:23.252 09:41:45 -- common/autotest_common.sh@955 -- # kill 58906 00:13:23.252 09:41:45 -- common/autotest_common.sh@960 -- # wait 58906 00:13:25.155 09:41:48 -- rpc/skip_rpc.sh@51 -- # grep -q 'TCP Transport Init' /home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:13:25.155 09:41:48 -- rpc/skip_rpc.sh@52 -- # rm /home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:13:25.155 ************************************ 00:13:25.155 END TEST skip_rpc_with_json 00:13:25.155 ************************************ 00:13:25.155 00:13:25.155 real 0m10.982s 00:13:25.155 user 0m10.318s 00:13:25.155 sys 0m0.995s 00:13:25.155 09:41:48 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:13:25.155 09:41:48 -- common/autotest_common.sh@10 -- # set +x 00:13:25.155 09:41:48 -- rpc/skip_rpc.sh@75 -- # run_test skip_rpc_with_delay test_skip_rpc_with_delay 00:13:25.155 09:41:48 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:13:25.155 09:41:48 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:13:25.155 09:41:48 -- common/autotest_common.sh@10 -- # set +x 00:13:25.155 ************************************ 00:13:25.155 START TEST skip_rpc_with_delay 00:13:25.155 ************************************ 00:13:25.155 09:41:48 -- common/autotest_common.sh@1111 -- # test_skip_rpc_with_delay 00:13:25.155 09:41:48 -- rpc/skip_rpc.sh@57 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:13:25.155 09:41:48 -- common/autotest_common.sh@638 -- # local es=0 00:13:25.155 09:41:48 -- 
common/autotest_common.sh@640 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:13:25.155 09:41:48 -- common/autotest_common.sh@626 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:13:25.155 09:41:48 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:13:25.155 09:41:48 -- common/autotest_common.sh@630 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:13:25.155 09:41:48 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:13:25.155 09:41:48 -- common/autotest_common.sh@632 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:13:25.155 09:41:48 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:13:25.155 09:41:48 -- common/autotest_common.sh@632 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:13:25.155 09:41:48 -- common/autotest_common.sh@632 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt ]] 00:13:25.155 09:41:48 -- common/autotest_common.sh@641 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:13:25.155 [2024-04-18 09:41:48.343972] app.c: 751:spdk_app_start: *ERROR*: Cannot use '--wait-for-rpc' if no RPC server is going to be started. 00:13:25.155 [2024-04-18 09:41:48.344218] app.c: 630:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 0, errno: 2 00:13:25.155 09:41:48 -- common/autotest_common.sh@641 -- # es=1 00:13:25.155 09:41:48 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:13:25.155 09:41:48 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:13:25.155 09:41:48 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:13:25.155 00:13:25.155 real 0m0.208s 00:13:25.155 user 0m0.115s 00:13:25.155 sys 0m0.090s 00:13:25.155 ************************************ 00:13:25.155 END TEST skip_rpc_with_delay 00:13:25.155 ************************************ 00:13:25.155 09:41:48 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:13:25.155 09:41:48 -- common/autotest_common.sh@10 -- # set +x 00:13:25.155 09:41:48 -- rpc/skip_rpc.sh@77 -- # uname 00:13:25.155 09:41:48 -- rpc/skip_rpc.sh@77 -- # '[' Linux '!=' FreeBSD ']' 00:13:25.155 09:41:48 -- rpc/skip_rpc.sh@78 -- # run_test exit_on_failed_rpc_init test_exit_on_failed_rpc_init 00:13:25.155 09:41:48 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:13:25.155 09:41:48 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:13:25.155 09:41:48 -- common/autotest_common.sh@10 -- # set +x 00:13:25.414 ************************************ 00:13:25.414 START TEST exit_on_failed_rpc_init 00:13:25.414 ************************************ 00:13:25.414 09:41:48 -- common/autotest_common.sh@1111 -- # test_exit_on_failed_rpc_init 00:13:25.414 09:41:48 -- rpc/skip_rpc.sh@62 -- # local spdk_pid=59042 00:13:25.414 09:41:48 -- rpc/skip_rpc.sh@63 -- # waitforlisten 59042 00:13:25.414 09:41:48 -- common/autotest_common.sh@817 -- # '[' -z 59042 ']' 00:13:25.414 09:41:48 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:25.414 09:41:48 -- rpc/skip_rpc.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:13:25.414 09:41:48 -- common/autotest_common.sh@822 -- # local max_retries=100 00:13:25.414 09:41:48 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:25.414 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:13:25.414 09:41:48 -- common/autotest_common.sh@826 -- # xtrace_disable 00:13:25.414 09:41:48 -- common/autotest_common.sh@10 -- # set +x 00:13:25.414 [2024-04-18 09:41:48.703180] Starting SPDK v24.05-pre git sha1 65b4e17c6 / DPDK 23.11.0 initialization... 00:13:25.414 [2024-04-18 09:41:48.703919] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59042 ] 00:13:25.674 [2024-04-18 09:41:48.889394] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:25.933 [2024-04-18 09:41:49.117848] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:13:26.869 09:41:49 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:13:26.869 09:41:49 -- common/autotest_common.sh@850 -- # return 0 00:13:26.869 09:41:49 -- rpc/skip_rpc.sh@65 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:13:26.869 09:41:49 -- rpc/skip_rpc.sh@67 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:13:26.869 09:41:49 -- common/autotest_common.sh@638 -- # local es=0 00:13:26.869 09:41:49 -- common/autotest_common.sh@640 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:13:26.869 09:41:49 -- common/autotest_common.sh@626 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:13:26.869 09:41:49 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:13:26.869 09:41:49 -- common/autotest_common.sh@630 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:13:26.869 09:41:49 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:13:26.869 09:41:49 -- common/autotest_common.sh@632 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:13:26.869 09:41:49 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:13:26.869 09:41:49 -- common/autotest_common.sh@632 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:13:26.869 09:41:49 -- common/autotest_common.sh@632 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt ]] 00:13:26.869 09:41:49 -- common/autotest_common.sh@641 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:13:26.869 [2024-04-18 09:41:50.063982] Starting SPDK v24.05-pre git sha1 65b4e17c6 / DPDK 23.11.0 initialization... 00:13:26.869 [2024-04-18 09:41:50.064218] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59066 ] 00:13:26.869 [2024-04-18 09:41:50.244026] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:27.127 [2024-04-18 09:41:50.512890] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:13:27.127 [2024-04-18 09:41:50.513044] rpc.c: 180:_spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
00:13:27.127 [2024-04-18 09:41:50.513076] rpc.c: 166:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:13:27.127 [2024-04-18 09:41:50.513102] app.c: 966:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:13:27.694 09:41:50 -- common/autotest_common.sh@641 -- # es=234 00:13:27.694 09:41:50 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:13:27.694 09:41:50 -- common/autotest_common.sh@650 -- # es=106 00:13:27.694 09:41:50 -- common/autotest_common.sh@651 -- # case "$es" in 00:13:27.694 09:41:50 -- common/autotest_common.sh@658 -- # es=1 00:13:27.694 09:41:50 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:13:27.694 09:41:50 -- rpc/skip_rpc.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:13:27.694 09:41:50 -- rpc/skip_rpc.sh@70 -- # killprocess 59042 00:13:27.694 09:41:50 -- common/autotest_common.sh@936 -- # '[' -z 59042 ']' 00:13:27.694 09:41:50 -- common/autotest_common.sh@940 -- # kill -0 59042 00:13:27.694 09:41:50 -- common/autotest_common.sh@941 -- # uname 00:13:27.694 09:41:50 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:13:27.694 09:41:50 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 59042 00:13:27.694 killing process with pid 59042 00:13:27.694 09:41:50 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:13:27.694 09:41:50 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:13:27.694 09:41:50 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 59042' 00:13:27.694 09:41:50 -- common/autotest_common.sh@955 -- # kill 59042 00:13:27.694 09:41:50 -- common/autotest_common.sh@960 -- # wait 59042 00:13:30.228 ************************************ 00:13:30.228 END TEST exit_on_failed_rpc_init 00:13:30.228 ************************************ 00:13:30.228 00:13:30.228 real 0m4.590s 00:13:30.228 user 0m5.123s 00:13:30.228 sys 0m0.721s 00:13:30.228 09:41:53 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:13:30.228 09:41:53 -- common/autotest_common.sh@10 -- # set +x 00:13:30.228 09:41:53 -- rpc/skip_rpc.sh@81 -- # rm /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:13:30.228 00:13:30.228 real 0m23.621s 00:13:30.228 user 0m22.430s 00:13:30.228 sys 0m2.594s 00:13:30.228 09:41:53 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:13:30.228 09:41:53 -- common/autotest_common.sh@10 -- # set +x 00:13:30.228 ************************************ 00:13:30.228 END TEST skip_rpc 00:13:30.228 ************************************ 00:13:30.228 09:41:53 -- spdk/autotest.sh@167 -- # run_test rpc_client /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:13:30.228 09:41:53 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:13:30.228 09:41:53 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:13:30.228 09:41:53 -- common/autotest_common.sh@10 -- # set +x 00:13:30.228 ************************************ 00:13:30.228 START TEST rpc_client 00:13:30.228 ************************************ 00:13:30.228 09:41:53 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:13:30.228 * Looking for test storage... 
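exit_on_failed_rpc_init, which has just finished, deliberately provokes the two errors above: a second target is started while the first still owns the default RPC socket, so rpc.c refuses to listen and the app stops with a non-zero code. A minimal reproduction (binary path from the log; the sleep stands in for the test's waitforlisten helper):

  spdk_tgt=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt
  "$spdk_tgt" -m 0x1 &               # first instance owns /var/tmp/spdk.sock
  first=$!
  sleep 5                            # assumption: crude wait for the socket
  "$spdk_tgt" -m 0x2                 # fails: 'RPC Unix domain socket path ... in use'
  echo "second instance exit code: $?"
  kill "$first"; wait "$first"

The es=234 / es=106 / es=1 bookkeeping above is the NOT helper reducing that non-zero exit status to a pass before the run moves on to rpc_client.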
00:13:30.228 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc_client 00:13:30.228 09:41:53 -- rpc_client/rpc_client.sh@10 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client_test 00:13:30.228 OK 00:13:30.228 09:41:53 -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT 00:13:30.228 00:13:30.228 real 0m0.165s 00:13:30.228 user 0m0.077s 00:13:30.228 sys 0m0.092s 00:13:30.228 09:41:53 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:13:30.228 09:41:53 -- common/autotest_common.sh@10 -- # set +x 00:13:30.228 ************************************ 00:13:30.228 END TEST rpc_client 00:13:30.228 ************************************ 00:13:30.228 09:41:53 -- spdk/autotest.sh@168 -- # run_test json_config /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:13:30.228 09:41:53 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:13:30.228 09:41:53 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:13:30.228 09:41:53 -- common/autotest_common.sh@10 -- # set +x 00:13:30.487 ************************************ 00:13:30.487 START TEST json_config 00:13:30.487 ************************************ 00:13:30.487 09:41:53 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:13:30.487 09:41:53 -- json_config/json_config.sh@8 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:13:30.487 09:41:53 -- nvmf/common.sh@7 -- # uname -s 00:13:30.487 09:41:53 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:30.487 09:41:53 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:30.487 09:41:53 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:30.487 09:41:53 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:30.487 09:41:53 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:30.487 09:41:53 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:30.487 09:41:53 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:30.487 09:41:53 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:30.487 09:41:53 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:30.487 09:41:53 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:30.487 09:41:53 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:e0c1fd61-2902-4c83-8893-eeb512aeb67d 00:13:30.487 09:41:53 -- nvmf/common.sh@18 -- # NVME_HOSTID=e0c1fd61-2902-4c83-8893-eeb512aeb67d 00:13:30.487 09:41:53 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:30.487 09:41:53 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:30.487 09:41:53 -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:13:30.487 09:41:53 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:30.487 09:41:53 -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:13:30.487 09:41:53 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:30.487 09:41:53 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:30.487 09:41:53 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:30.487 09:41:53 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:30.487 09:41:53 -- 
paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:30.487 09:41:53 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:30.487 09:41:53 -- paths/export.sh@5 -- # export PATH 00:13:30.487 09:41:53 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:30.487 09:41:53 -- nvmf/common.sh@47 -- # : 0 00:13:30.487 09:41:53 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:13:30.487 09:41:53 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:13:30.487 09:41:53 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:30.487 09:41:53 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:30.487 09:41:53 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:30.487 09:41:53 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:13:30.487 09:41:53 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:13:30.487 09:41:53 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:13:30.487 09:41:53 -- json_config/json_config.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/json_config/common.sh 00:13:30.487 09:41:53 -- json_config/json_config.sh@11 -- # [[ 0 -eq 1 ]] 00:13:30.487 09:41:53 -- json_config/json_config.sh@15 -- # [[ 0 -ne 1 ]] 00:13:30.487 09:41:53 -- json_config/json_config.sh@15 -- # [[ 0 -eq 1 ]] 00:13:30.487 09:41:53 -- json_config/json_config.sh@26 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + SPDK_TEST_VHOST + SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )) 00:13:30.487 09:41:53 -- json_config/json_config.sh@31 -- # app_pid=(['target']='' ['initiator']='') 00:13:30.487 09:41:53 -- json_config/json_config.sh@31 -- # declare -A app_pid 00:13:30.487 09:41:53 -- json_config/json_config.sh@32 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock' ['initiator']='/var/tmp/spdk_initiator.sock') 00:13:30.487 09:41:53 -- json_config/json_config.sh@32 -- # declare -A app_socket 00:13:30.487 09:41:53 -- json_config/json_config.sh@33 -- # app_params=(['target']='-m 0x1 -s 1024' ['initiator']='-m 0x2 -g -u -s 1024') 00:13:30.487 09:41:53 -- json_config/json_config.sh@33 -- # declare -A app_params 00:13:30.487 09:41:53 -- json_config/json_config.sh@34 -- # configs_path=(['target']='/home/vagrant/spdk_repo/spdk/spdk_tgt_config.json' ['initiator']='/home/vagrant/spdk_repo/spdk/spdk_initiator_config.json') 00:13:30.487 09:41:53 -- json_config/json_config.sh@34 -- # declare -A configs_path 00:13:30.487 09:41:53 -- json_config/json_config.sh@40 -- # last_event_id=0 00:13:30.487 
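The json_config suite drives its target through a dedicated socket rather than the default one, and holds the app at --wait-for-rpc until configuration has been pushed. The launch that follows, reduced to the commands visible in the log (how load_config is fed its JSON is left to the test's tgt_rpc helper):

  build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --wait-for-rpc &
  # wait for /var/tmp/spdk_tgt.sock to appear, then:
  ./scripts/rpc.py -s /var/tmp/spdk_tgt.sock load_config
  ./scripts/rpc.py -s /var/tmp/spdk_tgt.sock notify_get_types   # expects bdev_register, bdev_unregister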
INFO: JSON configuration test init 00:13:30.487 09:41:53 -- json_config/json_config.sh@355 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:13:30.487 09:41:53 -- json_config/json_config.sh@356 -- # echo 'INFO: JSON configuration test init' 00:13:30.487 09:41:53 -- json_config/json_config.sh@357 -- # json_config_test_init 00:13:30.487 09:41:53 -- json_config/json_config.sh@262 -- # timing_enter json_config_test_init 00:13:30.487 09:41:53 -- common/autotest_common.sh@710 -- # xtrace_disable 00:13:30.487 09:41:53 -- common/autotest_common.sh@10 -- # set +x 00:13:30.487 09:41:53 -- json_config/json_config.sh@263 -- # timing_enter json_config_setup_target 00:13:30.487 09:41:53 -- common/autotest_common.sh@710 -- # xtrace_disable 00:13:30.487 09:41:53 -- common/autotest_common.sh@10 -- # set +x 00:13:30.487 09:41:53 -- json_config/json_config.sh@265 -- # json_config_test_start_app target --wait-for-rpc 00:13:30.487 09:41:53 -- json_config/common.sh@9 -- # local app=target 00:13:30.487 09:41:53 -- json_config/common.sh@10 -- # shift 00:13:30.487 09:41:53 -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:13:30.487 09:41:53 -- json_config/common.sh@13 -- # [[ -z '' ]] 00:13:30.487 09:41:53 -- json_config/common.sh@15 -- # local app_extra_params= 00:13:30.487 09:41:53 -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:13:30.487 09:41:53 -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:13:30.487 Waiting for target to run... 00:13:30.487 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:13:30.487 09:41:53 -- json_config/common.sh@22 -- # app_pid["$app"]=59230 00:13:30.487 09:41:53 -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:13:30.487 09:41:53 -- json_config/common.sh@25 -- # waitforlisten 59230 /var/tmp/spdk_tgt.sock 00:13:30.487 09:41:53 -- common/autotest_common.sh@817 -- # '[' -z 59230 ']' 00:13:30.487 09:41:53 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:13:30.487 09:41:53 -- common/autotest_common.sh@822 -- # local max_retries=100 00:13:30.487 09:41:53 -- json_config/common.sh@21 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --wait-for-rpc 00:13:30.487 09:41:53 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:13:30.487 09:41:53 -- common/autotest_common.sh@826 -- # xtrace_disable 00:13:30.487 09:41:53 -- common/autotest_common.sh@10 -- # set +x 00:13:30.487 [2024-04-18 09:41:53.851979] Starting SPDK v24.05-pre git sha1 65b4e17c6 / DPDK 23.11.0 initialization... 
00:13:30.487 [2024-04-18 09:41:53.852493] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59230 ] 00:13:31.055 [2024-04-18 09:41:54.435164] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:31.315 [2024-04-18 09:41:54.661861] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:13:31.574 00:13:31.574 09:41:54 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:13:31.574 09:41:54 -- common/autotest_common.sh@850 -- # return 0 00:13:31.574 09:41:54 -- json_config/common.sh@26 -- # echo '' 00:13:31.574 09:41:54 -- json_config/json_config.sh@269 -- # create_accel_config 00:13:31.574 09:41:54 -- json_config/json_config.sh@93 -- # timing_enter create_accel_config 00:13:31.574 09:41:54 -- common/autotest_common.sh@710 -- # xtrace_disable 00:13:31.574 09:41:54 -- common/autotest_common.sh@10 -- # set +x 00:13:31.574 09:41:54 -- json_config/json_config.sh@95 -- # [[ 0 -eq 1 ]] 00:13:31.574 09:41:54 -- json_config/json_config.sh@101 -- # timing_exit create_accel_config 00:13:31.574 09:41:54 -- common/autotest_common.sh@716 -- # xtrace_disable 00:13:31.574 09:41:54 -- common/autotest_common.sh@10 -- # set +x 00:13:31.574 09:41:54 -- json_config/json_config.sh@273 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh --json-with-subsystems 00:13:31.574 09:41:54 -- json_config/json_config.sh@274 -- # tgt_rpc load_config 00:13:31.574 09:41:54 -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock load_config 00:13:32.511 09:41:55 -- json_config/json_config.sh@276 -- # tgt_check_notification_types 00:13:32.511 09:41:55 -- json_config/json_config.sh@43 -- # timing_enter tgt_check_notification_types 00:13:32.511 09:41:55 -- common/autotest_common.sh@710 -- # xtrace_disable 00:13:32.511 09:41:55 -- common/autotest_common.sh@10 -- # set +x 00:13:32.511 09:41:55 -- json_config/json_config.sh@45 -- # local ret=0 00:13:32.511 09:41:55 -- json_config/json_config.sh@46 -- # enabled_types=('bdev_register' 'bdev_unregister') 00:13:32.511 09:41:55 -- json_config/json_config.sh@46 -- # local enabled_types 00:13:32.511 09:41:55 -- json_config/json_config.sh@48 -- # tgt_rpc notify_get_types 00:13:32.511 09:41:55 -- json_config/json_config.sh@48 -- # jq -r '.[]' 00:13:32.511 09:41:55 -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock notify_get_types 00:13:33.080 09:41:56 -- json_config/json_config.sh@48 -- # get_types=('bdev_register' 'bdev_unregister') 00:13:33.080 09:41:56 -- json_config/json_config.sh@48 -- # local get_types 00:13:33.080 09:41:56 -- json_config/json_config.sh@49 -- # [[ bdev_register bdev_unregister != \b\d\e\v\_\r\e\g\i\s\t\e\r\ \b\d\e\v\_\u\n\r\e\g\i\s\t\e\r ]] 00:13:33.080 09:41:56 -- json_config/json_config.sh@54 -- # timing_exit tgt_check_notification_types 00:13:33.080 09:41:56 -- common/autotest_common.sh@716 -- # xtrace_disable 00:13:33.080 09:41:56 -- common/autotest_common.sh@10 -- # set +x 00:13:33.080 09:41:56 -- json_config/json_config.sh@55 -- # return 0 00:13:33.080 09:41:56 -- json_config/json_config.sh@278 -- # [[ 0 -eq 1 ]] 00:13:33.080 09:41:56 -- json_config/json_config.sh@282 -- # [[ 0 -eq 1 ]] 00:13:33.080 09:41:56 -- json_config/json_config.sh@286 -- # [[ 0 -eq 1 ]] 00:13:33.080 09:41:56 -- json_config/json_config.sh@290 -- # [[ 1 -eq 1 ]] 
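create_nvmf_subsystem_config, entered just below, is where the target gets its NVMe-oF personality. The same configuration as explicit rpc.py calls, with every argument copied from the log entries that follow:

  rpc='./scripts/rpc.py -s /var/tmp/spdk_tgt.sock'
  $rpc bdev_malloc_create 8 512 --name MallocForNvmf0
  $rpc bdev_malloc_create 4 1024 --name MallocForNvmf1
  $rpc nvmf_create_transport -t tcp -u 8192 -c 0
  $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420

The listener address 127.0.0.1:4420 matches NVMF_TCP_IP_ADDRESS and NVMF_PORT from the nvmf/common.sh sourced earlier in this suite.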
00:13:33.080 09:41:56 -- json_config/json_config.sh@291 -- # create_nvmf_subsystem_config 00:13:33.080 09:41:56 -- json_config/json_config.sh@230 -- # timing_enter create_nvmf_subsystem_config 00:13:33.080 09:41:56 -- common/autotest_common.sh@710 -- # xtrace_disable 00:13:33.080 09:41:56 -- common/autotest_common.sh@10 -- # set +x 00:13:33.080 09:41:56 -- json_config/json_config.sh@232 -- # NVMF_FIRST_TARGET_IP=127.0.0.1 00:13:33.080 09:41:56 -- json_config/json_config.sh@233 -- # [[ tcp == \r\d\m\a ]] 00:13:33.080 09:41:56 -- json_config/json_config.sh@237 -- # [[ -z 127.0.0.1 ]] 00:13:33.080 09:41:56 -- json_config/json_config.sh@242 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocForNvmf0 00:13:33.080 09:41:56 -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocForNvmf0 00:13:33.339 MallocForNvmf0 00:13:33.339 09:41:56 -- json_config/json_config.sh@243 -- # tgt_rpc bdev_malloc_create 4 1024 --name MallocForNvmf1 00:13:33.339 09:41:56 -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 4 1024 --name MallocForNvmf1 00:13:33.598 MallocForNvmf1 00:13:33.598 09:41:56 -- json_config/json_config.sh@245 -- # tgt_rpc nvmf_create_transport -t tcp -u 8192 -c 0 00:13:33.598 09:41:56 -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_transport -t tcp -u 8192 -c 0 00:13:33.856 [2024-04-18 09:41:57.074385] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:33.856 09:41:57 -- json_config/json_config.sh@246 -- # tgt_rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:13:33.856 09:41:57 -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:13:34.115 09:41:57 -- json_config/json_config.sh@247 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:13:34.115 09:41:57 -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:13:34.374 09:41:57 -- json_config/json_config.sh@248 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:13:34.374 09:41:57 -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:13:34.632 09:41:57 -- json_config/json_config.sh@249 -- # tgt_rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:13:34.632 09:41:57 -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:13:34.891 [2024-04-18 09:41:58.171217] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:13:34.891 09:41:58 -- json_config/json_config.sh@251 -- # timing_exit create_nvmf_subsystem_config 00:13:34.891 09:41:58 -- common/autotest_common.sh@716 -- # xtrace_disable 00:13:34.891 09:41:58 -- common/autotest_common.sh@10 -- # set +x 00:13:34.891 09:41:58 -- json_config/json_config.sh@293 -- # timing_exit json_config_setup_target 00:13:34.891 09:41:58 -- common/autotest_common.sh@716 -- # xtrace_disable 00:13:34.891 09:41:58 -- 
common/autotest_common.sh@10 -- # set +x 00:13:34.891 09:41:58 -- json_config/json_config.sh@295 -- # [[ 0 -eq 1 ]] 00:13:34.891 09:41:58 -- json_config/json_config.sh@300 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:13:34.892 09:41:58 -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:13:35.150 MallocBdevForConfigChangeCheck 00:13:35.150 09:41:58 -- json_config/json_config.sh@302 -- # timing_exit json_config_test_init 00:13:35.150 09:41:58 -- common/autotest_common.sh@716 -- # xtrace_disable 00:13:35.150 09:41:58 -- common/autotest_common.sh@10 -- # set +x 00:13:35.409 09:41:58 -- json_config/json_config.sh@359 -- # tgt_rpc save_config 00:13:35.409 09:41:58 -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:13:35.667 INFO: shutting down applications... 00:13:35.667 09:41:58 -- json_config/json_config.sh@361 -- # echo 'INFO: shutting down applications...' 00:13:35.667 09:41:58 -- json_config/json_config.sh@362 -- # [[ 0 -eq 1 ]] 00:13:35.667 09:41:58 -- json_config/json_config.sh@368 -- # json_config_clear target 00:13:35.667 09:41:58 -- json_config/json_config.sh@332 -- # [[ -n 22 ]] 00:13:35.667 09:41:58 -- json_config/json_config.sh@333 -- # /home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py -s /var/tmp/spdk_tgt.sock clear_config 00:13:35.926 Calling clear_iscsi_subsystem 00:13:35.926 Calling clear_nvmf_subsystem 00:13:35.926 Calling clear_nbd_subsystem 00:13:35.926 Calling clear_ublk_subsystem 00:13:35.926 Calling clear_vhost_blk_subsystem 00:13:35.926 Calling clear_vhost_scsi_subsystem 00:13:35.926 Calling clear_bdev_subsystem 00:13:36.185 09:41:59 -- json_config/json_config.sh@337 -- # local config_filter=/home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py 00:13:36.185 09:41:59 -- json_config/json_config.sh@343 -- # count=100 00:13:36.185 09:41:59 -- json_config/json_config.sh@344 -- # '[' 100 -gt 0 ']' 00:13:36.185 09:41:59 -- json_config/json_config.sh@345 -- # /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method delete_global_parameters 00:13:36.185 09:41:59 -- json_config/json_config.sh@345 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:13:36.185 09:41:59 -- json_config/json_config.sh@345 -- # /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method check_empty 00:13:36.445 09:41:59 -- json_config/json_config.sh@345 -- # break 00:13:36.445 09:41:59 -- json_config/json_config.sh@350 -- # '[' 100 -eq 0 ']' 00:13:36.445 09:41:59 -- json_config/json_config.sh@369 -- # json_config_test_shutdown_app target 00:13:36.445 09:41:59 -- json_config/common.sh@31 -- # local app=target 00:13:36.445 09:41:59 -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:13:36.445 09:41:59 -- json_config/common.sh@35 -- # [[ -n 59230 ]] 00:13:36.445 09:41:59 -- json_config/common.sh@38 -- # kill -SIGINT 59230 00:13:36.445 09:41:59 -- json_config/common.sh@40 -- # (( i = 0 )) 00:13:36.445 09:41:59 -- json_config/common.sh@40 -- # (( i < 30 )) 00:13:36.445 09:41:59 -- json_config/common.sh@41 -- # kill -0 59230 00:13:36.445 09:41:59 -- json_config/common.sh@45 -- # sleep 0.5 00:13:37.010 09:42:00 -- json_config/common.sh@40 -- # (( i++ )) 00:13:37.010 09:42:00 -- json_config/common.sh@40 -- # (( i < 30 )) 00:13:37.010 09:42:00 -- json_config/common.sh@41 -- # kill -0 59230 00:13:37.010 09:42:00 -- 
json_config/common.sh@45 -- # sleep 0.5 00:13:37.578 09:42:00 -- json_config/common.sh@40 -- # (( i++ )) 00:13:37.579 09:42:00 -- json_config/common.sh@40 -- # (( i < 30 )) 00:13:37.579 09:42:00 -- json_config/common.sh@41 -- # kill -0 59230 00:13:37.579 09:42:00 -- json_config/common.sh@42 -- # app_pid["$app"]= 00:13:37.579 09:42:00 -- json_config/common.sh@43 -- # break 00:13:37.579 09:42:00 -- json_config/common.sh@48 -- # [[ -n '' ]] 00:13:37.579 09:42:00 -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:13:37.579 SPDK target shutdown done 00:13:37.579 09:42:00 -- json_config/json_config.sh@371 -- # echo 'INFO: relaunching applications...' 00:13:37.579 INFO: relaunching applications... 00:13:37.579 09:42:00 -- json_config/json_config.sh@372 -- # json_config_test_start_app target --json /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:13:37.579 09:42:00 -- json_config/common.sh@9 -- # local app=target 00:13:37.579 09:42:00 -- json_config/common.sh@10 -- # shift 00:13:37.579 09:42:00 -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:13:37.579 09:42:00 -- json_config/common.sh@13 -- # [[ -z '' ]] 00:13:37.579 09:42:00 -- json_config/common.sh@15 -- # local app_extra_params= 00:13:37.579 09:42:00 -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:13:37.579 09:42:00 -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:13:37.579 09:42:00 -- json_config/common.sh@22 -- # app_pid["$app"]=59433 00:13:37.579 09:42:00 -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:13:37.579 Waiting for target to run... 00:13:37.579 09:42:00 -- json_config/common.sh@25 -- # waitforlisten 59433 /var/tmp/spdk_tgt.sock 00:13:37.579 09:42:00 -- common/autotest_common.sh@817 -- # '[' -z 59433 ']' 00:13:37.579 09:42:00 -- json_config/common.sh@21 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:13:37.579 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:13:37.579 09:42:00 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:13:37.579 09:42:00 -- common/autotest_common.sh@822 -- # local max_retries=100 00:13:37.579 09:42:00 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:13:37.579 09:42:00 -- common/autotest_common.sh@826 -- # xtrace_disable 00:13:37.579 09:42:00 -- common/autotest_common.sh@10 -- # set +x 00:13:37.579 [2024-04-18 09:42:00.904983] Starting SPDK v24.05-pre git sha1 65b4e17c6 / DPDK 23.11.0 initialization... 
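The relaunch just traced starts spdk_tgt again from the JSON configuration saved earlier and waits for its RPC socket to come back before the comparison step. A simplified sketch of that start-and-wait pattern; the real waitforlisten helper in autotest_common.sh is more elaborate, so the polling loop here is only an illustrative stand-in:

/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 \
    -r /var/tmp/spdk_tgt.sock --json /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json &
pid=$!
# poll the RPC socket until the target answers (give up after roughly 50 seconds)
for _ in $(seq 1 100); do
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock rpc_get_methods &>/dev/null && break
    sleep 0.5
done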
00:13:37.579 [2024-04-18 09:42:00.905186] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59433 ] 00:13:38.147 [2024-04-18 09:42:01.373731] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:38.405 [2024-04-18 09:42:01.621471] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:13:39.342 [2024-04-18 09:42:02.537991] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:39.342 [2024-04-18 09:42:02.570733] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:13:39.342 09:42:02 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:13:39.342 09:42:02 -- common/autotest_common.sh@850 -- # return 0 00:13:39.342 00:13:39.342 09:42:02 -- json_config/common.sh@26 -- # echo '' 00:13:39.342 09:42:02 -- json_config/json_config.sh@373 -- # [[ 0 -eq 1 ]] 00:13:39.342 INFO: Checking if target configuration is the same... 00:13:39.342 09:42:02 -- json_config/json_config.sh@377 -- # echo 'INFO: Checking if target configuration is the same...' 00:13:39.343 09:42:02 -- json_config/json_config.sh@378 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_diff.sh /dev/fd/62 /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:13:39.343 09:42:02 -- json_config/json_config.sh@378 -- # tgt_rpc save_config 00:13:39.343 09:42:02 -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:13:39.343 + '[' 2 -ne 2 ']' 00:13:39.343 +++ dirname /home/vagrant/spdk_repo/spdk/test/json_config/json_diff.sh 00:13:39.343 ++ readlink -f /home/vagrant/spdk_repo/spdk/test/json_config/../.. 00:13:39.343 + rootdir=/home/vagrant/spdk_repo/spdk 00:13:39.343 +++ basename /dev/fd/62 00:13:39.343 ++ mktemp /tmp/62.XXX 00:13:39.343 + tmp_file_1=/tmp/62.PWo 00:13:39.343 +++ basename /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:13:39.343 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:13:39.343 + tmp_file_2=/tmp/spdk_tgt_config.json.ePZ 00:13:39.343 + ret=0 00:13:39.343 + /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort 00:13:39.909 + /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort 00:13:39.909 + diff -u /tmp/62.PWo /tmp/spdk_tgt_config.json.ePZ 00:13:39.909 INFO: JSON config files are the same 00:13:39.909 + echo 'INFO: JSON config files are the same' 00:13:39.909 + rm /tmp/62.PWo /tmp/spdk_tgt_config.json.ePZ 00:13:39.909 + exit 0 00:13:39.909 INFO: changing configuration and checking if this can be detected... 00:13:39.909 09:42:03 -- json_config/json_config.sh@379 -- # [[ 0 -eq 1 ]] 00:13:39.909 09:42:03 -- json_config/json_config.sh@384 -- # echo 'INFO: changing configuration and checking if this can be detected...' 
00:13:39.909 09:42:03 -- json_config/json_config.sh@386 -- # tgt_rpc bdev_malloc_delete MallocBdevForConfigChangeCheck 00:13:39.909 09:42:03 -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_delete MallocBdevForConfigChangeCheck 00:13:40.168 09:42:03 -- json_config/json_config.sh@387 -- # tgt_rpc save_config 00:13:40.168 09:42:03 -- json_config/json_config.sh@387 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_diff.sh /dev/fd/62 /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:13:40.168 09:42:03 -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:13:40.168 + '[' 2 -ne 2 ']' 00:13:40.168 +++ dirname /home/vagrant/spdk_repo/spdk/test/json_config/json_diff.sh 00:13:40.168 ++ readlink -f /home/vagrant/spdk_repo/spdk/test/json_config/../.. 00:13:40.168 + rootdir=/home/vagrant/spdk_repo/spdk 00:13:40.168 +++ basename /dev/fd/62 00:13:40.168 ++ mktemp /tmp/62.XXX 00:13:40.168 + tmp_file_1=/tmp/62.eBF 00:13:40.168 +++ basename /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:13:40.168 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:13:40.168 + tmp_file_2=/tmp/spdk_tgt_config.json.Hg3 00:13:40.168 + ret=0 00:13:40.168 + /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort 00:13:40.428 + /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort 00:13:40.428 + diff -u /tmp/62.eBF /tmp/spdk_tgt_config.json.Hg3 00:13:40.428 + ret=1 00:13:40.428 + echo '=== Start of file: /tmp/62.eBF ===' 00:13:40.428 + cat /tmp/62.eBF 00:13:40.428 + echo '=== End of file: /tmp/62.eBF ===' 00:13:40.428 + echo '' 00:13:40.428 + echo '=== Start of file: /tmp/spdk_tgt_config.json.Hg3 ===' 00:13:40.428 + cat /tmp/spdk_tgt_config.json.Hg3 00:13:40.428 + echo '=== End of file: /tmp/spdk_tgt_config.json.Hg3 ===' 00:13:40.428 + echo '' 00:13:40.428 + rm /tmp/62.eBF /tmp/spdk_tgt_config.json.Hg3 00:13:40.428 + exit 1 00:13:40.428 INFO: configuration change detected. 00:13:40.428 09:42:03 -- json_config/json_config.sh@391 -- # echo 'INFO: configuration change detected.' 
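Both checks above follow the same pattern: dump the live configuration with the save_config RPC, normalize that dump and spdk_tgt_config.json with config_filter.py -method sort, and diff the two. With nothing changed the diff is empty and the check exits 0; after bdev_malloc_delete removes MallocBdevForConfigChangeCheck the sorted dumps differ and the check returns 1. A condensed sketch of that comparison, with temp-file handling simplified for illustration:

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
filter=/home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py
cfg=/home/vagrant/spdk_repo/spdk/spdk_tgt_config.json
live=$(mktemp)
file=$(mktemp)
# normalize the running target's config and the saved file the same way
$rpc -s /var/tmp/spdk_tgt.sock save_config | $filter -method sort > "$live"
$filter -method sort < "$cfg" > "$file"
if diff -u "$live" "$file" > /dev/null; then
    echo 'INFO: JSON config files are the same'
else
    echo 'INFO: configuration change detected.'
fi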
00:13:40.428 09:42:03 -- json_config/json_config.sh@394 -- # json_config_test_fini 00:13:40.428 09:42:03 -- json_config/json_config.sh@306 -- # timing_enter json_config_test_fini 00:13:40.428 09:42:03 -- common/autotest_common.sh@710 -- # xtrace_disable 00:13:40.428 09:42:03 -- common/autotest_common.sh@10 -- # set +x 00:13:40.428 09:42:03 -- json_config/json_config.sh@307 -- # local ret=0 00:13:40.428 09:42:03 -- json_config/json_config.sh@309 -- # [[ -n '' ]] 00:13:40.428 09:42:03 -- json_config/json_config.sh@317 -- # [[ -n 59433 ]] 00:13:40.428 09:42:03 -- json_config/json_config.sh@320 -- # cleanup_bdev_subsystem_config 00:13:40.428 09:42:03 -- json_config/json_config.sh@184 -- # timing_enter cleanup_bdev_subsystem_config 00:13:40.428 09:42:03 -- common/autotest_common.sh@710 -- # xtrace_disable 00:13:40.428 09:42:03 -- common/autotest_common.sh@10 -- # set +x 00:13:40.428 09:42:03 -- json_config/json_config.sh@186 -- # [[ 0 -eq 1 ]] 00:13:40.428 09:42:03 -- json_config/json_config.sh@193 -- # uname -s 00:13:40.428 09:42:03 -- json_config/json_config.sh@193 -- # [[ Linux = Linux ]] 00:13:40.428 09:42:03 -- json_config/json_config.sh@194 -- # rm -f /sample_aio 00:13:40.428 09:42:03 -- json_config/json_config.sh@197 -- # [[ 0 -eq 1 ]] 00:13:40.428 09:42:03 -- json_config/json_config.sh@201 -- # timing_exit cleanup_bdev_subsystem_config 00:13:40.428 09:42:03 -- common/autotest_common.sh@716 -- # xtrace_disable 00:13:40.428 09:42:03 -- common/autotest_common.sh@10 -- # set +x 00:13:40.688 09:42:03 -- json_config/json_config.sh@323 -- # killprocess 59433 00:13:40.688 09:42:03 -- common/autotest_common.sh@936 -- # '[' -z 59433 ']' 00:13:40.688 09:42:03 -- common/autotest_common.sh@940 -- # kill -0 59433 00:13:40.688 09:42:03 -- common/autotest_common.sh@941 -- # uname 00:13:40.688 09:42:03 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:13:40.688 09:42:03 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 59433 00:13:40.688 killing process with pid 59433 00:13:40.688 09:42:03 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:13:40.688 09:42:03 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:13:40.688 09:42:03 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 59433' 00:13:40.688 09:42:03 -- common/autotest_common.sh@955 -- # kill 59433 00:13:40.688 09:42:03 -- common/autotest_common.sh@960 -- # wait 59433 00:13:41.625 09:42:04 -- json_config/json_config.sh@326 -- # rm -f /home/vagrant/spdk_repo/spdk/spdk_initiator_config.json /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:13:41.625 09:42:04 -- json_config/json_config.sh@327 -- # timing_exit json_config_test_fini 00:13:41.625 09:42:04 -- common/autotest_common.sh@716 -- # xtrace_disable 00:13:41.625 09:42:04 -- common/autotest_common.sh@10 -- # set +x 00:13:41.625 INFO: Success 00:13:41.625 09:42:04 -- json_config/json_config.sh@328 -- # return 0 00:13:41.625 09:42:04 -- json_config/json_config.sh@396 -- # echo 'INFO: Success' 00:13:41.625 00:13:41.625 real 0m11.303s 00:13:41.625 user 0m14.811s 00:13:41.625 sys 0m2.359s 00:13:41.625 09:42:04 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:13:41.625 09:42:04 -- common/autotest_common.sh@10 -- # set +x 00:13:41.625 ************************************ 00:13:41.626 END TEST json_config 00:13:41.626 ************************************ 00:13:41.626 09:42:04 -- spdk/autotest.sh@169 -- # run_test json_config_extra_key /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:13:41.626 
09:42:04 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:13:41.626 09:42:04 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:13:41.626 09:42:04 -- common/autotest_common.sh@10 -- # set +x 00:13:41.885 ************************************ 00:13:41.885 START TEST json_config_extra_key 00:13:41.885 ************************************ 00:13:41.885 09:42:05 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:13:41.885 09:42:05 -- json_config/json_config_extra_key.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:13:41.885 09:42:05 -- nvmf/common.sh@7 -- # uname -s 00:13:41.885 09:42:05 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:41.885 09:42:05 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:41.885 09:42:05 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:41.885 09:42:05 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:41.885 09:42:05 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:41.885 09:42:05 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:41.885 09:42:05 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:41.885 09:42:05 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:41.885 09:42:05 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:41.885 09:42:05 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:41.885 09:42:05 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:e0c1fd61-2902-4c83-8893-eeb512aeb67d 00:13:41.885 09:42:05 -- nvmf/common.sh@18 -- # NVME_HOSTID=e0c1fd61-2902-4c83-8893-eeb512aeb67d 00:13:41.885 09:42:05 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:41.885 09:42:05 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:41.885 09:42:05 -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:13:41.885 09:42:05 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:41.885 09:42:05 -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:13:41.885 09:42:05 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:41.885 09:42:05 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:41.885 09:42:05 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:41.885 09:42:05 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:41.885 09:42:05 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:41.885 09:42:05 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:41.885 09:42:05 -- paths/export.sh@5 -- # export PATH 00:13:41.885 09:42:05 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:41.885 09:42:05 -- nvmf/common.sh@47 -- # : 0 00:13:41.885 09:42:05 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:13:41.885 09:42:05 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:13:41.885 09:42:05 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:41.885 09:42:05 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:41.885 09:42:05 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:41.885 09:42:05 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:13:41.885 09:42:05 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:13:41.885 09:42:05 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:13:41.885 09:42:05 -- json_config/json_config_extra_key.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/json_config/common.sh 00:13:41.885 09:42:05 -- json_config/json_config_extra_key.sh@17 -- # app_pid=(['target']='') 00:13:41.885 09:42:05 -- json_config/json_config_extra_key.sh@17 -- # declare -A app_pid 00:13:41.885 09:42:05 -- json_config/json_config_extra_key.sh@18 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock') 00:13:41.885 09:42:05 -- json_config/json_config_extra_key.sh@18 -- # declare -A app_socket 00:13:41.885 09:42:05 -- json_config/json_config_extra_key.sh@19 -- # app_params=(['target']='-m 0x1 -s 1024') 00:13:41.885 09:42:05 -- json_config/json_config_extra_key.sh@19 -- # declare -A app_params 00:13:41.886 09:42:05 -- json_config/json_config_extra_key.sh@20 -- # configs_path=(['target']='/home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json') 00:13:41.886 09:42:05 -- json_config/json_config_extra_key.sh@20 -- # declare -A configs_path 00:13:41.886 09:42:05 -- json_config/json_config_extra_key.sh@22 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:13:41.886 INFO: launching applications... 00:13:41.886 09:42:05 -- json_config/json_config_extra_key.sh@24 -- # echo 'INFO: launching applications...' 00:13:41.886 09:42:05 -- json_config/json_config_extra_key.sh@25 -- # json_config_test_start_app target --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:13:41.886 09:42:05 -- json_config/common.sh@9 -- # local app=target 00:13:41.886 09:42:05 -- json_config/common.sh@10 -- # shift 00:13:41.886 Waiting for target to run... 00:13:41.886 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 
00:13:41.886 09:42:05 -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:13:41.886 09:42:05 -- json_config/common.sh@13 -- # [[ -z '' ]] 00:13:41.886 09:42:05 -- json_config/common.sh@15 -- # local app_extra_params= 00:13:41.886 09:42:05 -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:13:41.886 09:42:05 -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:13:41.886 09:42:05 -- json_config/common.sh@22 -- # app_pid["$app"]=59602 00:13:41.886 09:42:05 -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:13:41.886 09:42:05 -- json_config/common.sh@25 -- # waitforlisten 59602 /var/tmp/spdk_tgt.sock 00:13:41.886 09:42:05 -- common/autotest_common.sh@817 -- # '[' -z 59602 ']' 00:13:41.886 09:42:05 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:13:41.886 09:42:05 -- common/autotest_common.sh@822 -- # local max_retries=100 00:13:41.886 09:42:05 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:13:41.886 09:42:05 -- json_config/common.sh@21 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:13:41.886 09:42:05 -- common/autotest_common.sh@826 -- # xtrace_disable 00:13:41.886 09:42:05 -- common/autotest_common.sh@10 -- # set +x 00:13:41.886 [2024-04-18 09:42:05.272311] Starting SPDK v24.05-pre git sha1 65b4e17c6 / DPDK 23.11.0 initialization... 00:13:41.886 [2024-04-18 09:42:05.272569] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59602 ] 00:13:42.454 [2024-04-18 09:42:05.830077] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:42.713 [2024-04-18 09:42:06.052453] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:13:43.648 00:13:43.648 INFO: shutting down applications... 00:13:43.648 09:42:06 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:13:43.648 09:42:06 -- common/autotest_common.sh@850 -- # return 0 00:13:43.648 09:42:06 -- json_config/common.sh@26 -- # echo '' 00:13:43.648 09:42:06 -- json_config/json_config_extra_key.sh@27 -- # echo 'INFO: shutting down applications...' 
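json_config_test_shutdown_app, traced next, stops the target the same way the earlier json_config run did: send SIGINT, then poll until the process disappears, with up to 30 half-second retries. A condensed sketch of that loop (the real helper also clears app_pid for the app once the process is gone):

pid=59602                       # pid recorded when the target was launched above
kill -SIGINT "$pid"
for (( i = 0; i < 30; i++ )); do
    kill -0 "$pid" 2>/dev/null || break    # kill -0 only tests whether the pid still exists
    sleep 0.5
done
echo 'SPDK target shutdown done'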
00:13:43.648 09:42:06 -- json_config/json_config_extra_key.sh@28 -- # json_config_test_shutdown_app target 00:13:43.648 09:42:06 -- json_config/common.sh@31 -- # local app=target 00:13:43.648 09:42:06 -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:13:43.648 09:42:06 -- json_config/common.sh@35 -- # [[ -n 59602 ]] 00:13:43.648 09:42:06 -- json_config/common.sh@38 -- # kill -SIGINT 59602 00:13:43.648 09:42:06 -- json_config/common.sh@40 -- # (( i = 0 )) 00:13:43.648 09:42:06 -- json_config/common.sh@40 -- # (( i < 30 )) 00:13:43.648 09:42:06 -- json_config/common.sh@41 -- # kill -0 59602 00:13:43.648 09:42:06 -- json_config/common.sh@45 -- # sleep 0.5 00:13:43.906 09:42:07 -- json_config/common.sh@40 -- # (( i++ )) 00:13:43.906 09:42:07 -- json_config/common.sh@40 -- # (( i < 30 )) 00:13:43.906 09:42:07 -- json_config/common.sh@41 -- # kill -0 59602 00:13:43.906 09:42:07 -- json_config/common.sh@45 -- # sleep 0.5 00:13:44.473 09:42:07 -- json_config/common.sh@40 -- # (( i++ )) 00:13:44.473 09:42:07 -- json_config/common.sh@40 -- # (( i < 30 )) 00:13:44.473 09:42:07 -- json_config/common.sh@41 -- # kill -0 59602 00:13:44.473 09:42:07 -- json_config/common.sh@45 -- # sleep 0.5 00:13:45.041 09:42:08 -- json_config/common.sh@40 -- # (( i++ )) 00:13:45.041 09:42:08 -- json_config/common.sh@40 -- # (( i < 30 )) 00:13:45.041 09:42:08 -- json_config/common.sh@41 -- # kill -0 59602 00:13:45.041 09:42:08 -- json_config/common.sh@45 -- # sleep 0.5 00:13:45.609 09:42:08 -- json_config/common.sh@40 -- # (( i++ )) 00:13:45.609 09:42:08 -- json_config/common.sh@40 -- # (( i < 30 )) 00:13:45.609 09:42:08 -- json_config/common.sh@41 -- # kill -0 59602 00:13:45.609 09:42:08 -- json_config/common.sh@45 -- # sleep 0.5 00:13:45.869 09:42:09 -- json_config/common.sh@40 -- # (( i++ )) 00:13:45.869 09:42:09 -- json_config/common.sh@40 -- # (( i < 30 )) 00:13:45.869 09:42:09 -- json_config/common.sh@41 -- # kill -0 59602 00:13:45.869 09:42:09 -- json_config/common.sh@45 -- # sleep 0.5 00:13:46.437 09:42:09 -- json_config/common.sh@40 -- # (( i++ )) 00:13:46.437 09:42:09 -- json_config/common.sh@40 -- # (( i < 30 )) 00:13:46.437 09:42:09 -- json_config/common.sh@41 -- # kill -0 59602 00:13:46.437 09:42:09 -- json_config/common.sh@42 -- # app_pid["$app"]= 00:13:46.437 09:42:09 -- json_config/common.sh@43 -- # break 00:13:46.437 SPDK target shutdown done 00:13:46.437 09:42:09 -- json_config/common.sh@48 -- # [[ -n '' ]] 00:13:46.437 09:42:09 -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:13:46.437 Success 00:13:46.437 09:42:09 -- json_config/json_config_extra_key.sh@30 -- # echo Success 00:13:46.437 ************************************ 00:13:46.437 END TEST json_config_extra_key 00:13:46.437 ************************************ 00:13:46.437 00:13:46.437 real 0m4.694s 00:13:46.437 user 0m4.115s 00:13:46.437 sys 0m0.748s 00:13:46.437 09:42:09 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:13:46.437 09:42:09 -- common/autotest_common.sh@10 -- # set +x 00:13:46.437 09:42:09 -- spdk/autotest.sh@170 -- # run_test alias_rpc /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:13:46.437 09:42:09 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:13:46.437 09:42:09 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:13:46.437 09:42:09 -- common/autotest_common.sh@10 -- # set +x 00:13:46.696 ************************************ 00:13:46.696 START TEST alias_rpc 00:13:46.696 ************************************ 00:13:46.696 09:42:09 -- common/autotest_common.sh@1111 
-- # /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:13:46.696 * Looking for test storage... 00:13:46.696 * Found test storage at /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc 00:13:46.696 09:42:09 -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:13:46.696 09:42:09 -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=59710 00:13:46.696 09:42:09 -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 59710 00:13:46.696 09:42:09 -- alias_rpc/alias_rpc.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:13:46.696 09:42:09 -- common/autotest_common.sh@817 -- # '[' -z 59710 ']' 00:13:46.696 09:42:09 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:46.696 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:46.696 09:42:09 -- common/autotest_common.sh@822 -- # local max_retries=100 00:13:46.696 09:42:09 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:46.696 09:42:09 -- common/autotest_common.sh@826 -- # xtrace_disable 00:13:46.696 09:42:09 -- common/autotest_common.sh@10 -- # set +x 00:13:46.954 [2024-04-18 09:42:10.091170] Starting SPDK v24.05-pre git sha1 65b4e17c6 / DPDK 23.11.0 initialization... 00:13:46.954 [2024-04-18 09:42:10.092530] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59710 ] 00:13:46.954 [2024-04-18 09:42:10.273422] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:47.212 [2024-04-18 09:42:10.533454] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:13:48.147 09:42:11 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:13:48.147 09:42:11 -- common/autotest_common.sh@850 -- # return 0 00:13:48.147 09:42:11 -- alias_rpc/alias_rpc.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_config -i 00:13:48.406 09:42:11 -- alias_rpc/alias_rpc.sh@19 -- # killprocess 59710 00:13:48.406 09:42:11 -- common/autotest_common.sh@936 -- # '[' -z 59710 ']' 00:13:48.406 09:42:11 -- common/autotest_common.sh@940 -- # kill -0 59710 00:13:48.406 09:42:11 -- common/autotest_common.sh@941 -- # uname 00:13:48.406 09:42:11 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:13:48.406 09:42:11 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 59710 00:13:48.406 killing process with pid 59710 00:13:48.406 09:42:11 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:13:48.406 09:42:11 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:13:48.406 09:42:11 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 59710' 00:13:48.406 09:42:11 -- common/autotest_common.sh@955 -- # kill 59710 00:13:48.406 09:42:11 -- common/autotest_common.sh@960 -- # wait 59710 00:13:50.939 ************************************ 00:13:50.939 END TEST alias_rpc 00:13:50.939 ************************************ 00:13:50.939 00:13:50.939 real 0m4.086s 00:13:50.939 user 0m4.130s 00:13:50.939 sys 0m0.659s 00:13:50.939 09:42:13 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:13:50.939 09:42:13 -- common/autotest_common.sh@10 -- # set +x 00:13:50.939 09:42:13 -- spdk/autotest.sh@172 -- # [[ 0 -eq 0 ]] 00:13:50.939 09:42:13 -- spdk/autotest.sh@173 -- # run_test spdkcli_tcp 
/home/vagrant/spdk_repo/spdk/test/spdkcli/tcp.sh 00:13:50.939 09:42:13 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:13:50.939 09:42:13 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:13:50.939 09:42:13 -- common/autotest_common.sh@10 -- # set +x 00:13:50.939 ************************************ 00:13:50.939 START TEST spdkcli_tcp 00:13:50.939 ************************************ 00:13:50.939 09:42:14 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/tcp.sh 00:13:50.939 * Looking for test storage... 00:13:50.939 * Found test storage at /home/vagrant/spdk_repo/spdk/test/spdkcli 00:13:50.939 09:42:14 -- spdkcli/tcp.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/spdkcli/common.sh 00:13:50.939 09:42:14 -- spdkcli/common.sh@6 -- # spdkcli_job=/home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py 00:13:50.939 09:42:14 -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py 00:13:50.939 09:42:14 -- spdkcli/tcp.sh@18 -- # IP_ADDRESS=127.0.0.1 00:13:50.939 09:42:14 -- spdkcli/tcp.sh@19 -- # PORT=9998 00:13:50.939 09:42:14 -- spdkcli/tcp.sh@21 -- # trap 'err_cleanup; exit 1' SIGINT SIGTERM EXIT 00:13:50.939 09:42:14 -- spdkcli/tcp.sh@23 -- # timing_enter run_spdk_tgt_tcp 00:13:50.939 09:42:14 -- common/autotest_common.sh@710 -- # xtrace_disable 00:13:50.939 09:42:14 -- common/autotest_common.sh@10 -- # set +x 00:13:50.939 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:50.939 09:42:14 -- spdkcli/tcp.sh@25 -- # spdk_tgt_pid=59820 00:13:50.939 09:42:14 -- spdkcli/tcp.sh@27 -- # waitforlisten 59820 00:13:50.939 09:42:14 -- spdkcli/tcp.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:13:50.939 09:42:14 -- common/autotest_common.sh@817 -- # '[' -z 59820 ']' 00:13:50.939 09:42:14 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:50.939 09:42:14 -- common/autotest_common.sh@822 -- # local max_retries=100 00:13:50.939 09:42:14 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:50.939 09:42:14 -- common/autotest_common.sh@826 -- # xtrace_disable 00:13:50.939 09:42:14 -- common/autotest_common.sh@10 -- # set +x 00:13:50.939 [2024-04-18 09:42:14.315826] Starting SPDK v24.05-pre git sha1 65b4e17c6 / DPDK 23.11.0 initialization... 
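The spdkcli_tcp entries that follow exercise rpc.py over TCP rather than over the UNIX socket directly: socat bridges TCP port 9998 on 127.0.0.1 to the target's /var/tmp/spdk.sock, and the client connects through that bridge with connection retries and a per-attempt timeout. A minimal sketch of the bridge as traced below; the explicit kill at the end is only illustrative cleanup:

socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock &
socat_pid=$!
# -r 100: retry the connection up to 100 times, -t 2: 2-second timeout per request
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods
kill "$socat_pid"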
00:13:50.939 [2024-04-18 09:42:14.316018] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59820 ] 00:13:51.198 [2024-04-18 09:42:14.491772] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 2 00:13:51.457 [2024-04-18 09:42:14.753592] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:13:51.457 [2024-04-18 09:42:14.753604] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:13:52.393 09:42:15 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:13:52.393 09:42:15 -- common/autotest_common.sh@850 -- # return 0 00:13:52.393 09:42:15 -- spdkcli/tcp.sh@31 -- # socat_pid=59837 00:13:52.393 09:42:15 -- spdkcli/tcp.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods 00:13:52.393 09:42:15 -- spdkcli/tcp.sh@30 -- # socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock 00:13:52.653 [ 00:13:52.653 "bdev_malloc_delete", 00:13:52.653 "bdev_malloc_create", 00:13:52.653 "bdev_null_resize", 00:13:52.653 "bdev_null_delete", 00:13:52.653 "bdev_null_create", 00:13:52.653 "bdev_nvme_cuse_unregister", 00:13:52.653 "bdev_nvme_cuse_register", 00:13:52.653 "bdev_opal_new_user", 00:13:52.653 "bdev_opal_set_lock_state", 00:13:52.653 "bdev_opal_delete", 00:13:52.653 "bdev_opal_get_info", 00:13:52.653 "bdev_opal_create", 00:13:52.653 "bdev_nvme_opal_revert", 00:13:52.653 "bdev_nvme_opal_init", 00:13:52.653 "bdev_nvme_send_cmd", 00:13:52.653 "bdev_nvme_get_path_iostat", 00:13:52.653 "bdev_nvme_get_mdns_discovery_info", 00:13:52.653 "bdev_nvme_stop_mdns_discovery", 00:13:52.653 "bdev_nvme_start_mdns_discovery", 00:13:52.653 "bdev_nvme_set_multipath_policy", 00:13:52.653 "bdev_nvme_set_preferred_path", 00:13:52.653 "bdev_nvme_get_io_paths", 00:13:52.653 "bdev_nvme_remove_error_injection", 00:13:52.653 "bdev_nvme_add_error_injection", 00:13:52.653 "bdev_nvme_get_discovery_info", 00:13:52.653 "bdev_nvme_stop_discovery", 00:13:52.653 "bdev_nvme_start_discovery", 00:13:52.653 "bdev_nvme_get_controller_health_info", 00:13:52.653 "bdev_nvme_disable_controller", 00:13:52.653 "bdev_nvme_enable_controller", 00:13:52.653 "bdev_nvme_reset_controller", 00:13:52.653 "bdev_nvme_get_transport_statistics", 00:13:52.653 "bdev_nvme_apply_firmware", 00:13:52.653 "bdev_nvme_detach_controller", 00:13:52.653 "bdev_nvme_get_controllers", 00:13:52.653 "bdev_nvme_attach_controller", 00:13:52.653 "bdev_nvme_set_hotplug", 00:13:52.653 "bdev_nvme_set_options", 00:13:52.653 "bdev_passthru_delete", 00:13:52.653 "bdev_passthru_create", 00:13:52.653 "bdev_lvol_grow_lvstore", 00:13:52.653 "bdev_lvol_get_lvols", 00:13:52.653 "bdev_lvol_get_lvstores", 00:13:52.653 "bdev_lvol_delete", 00:13:52.653 "bdev_lvol_set_read_only", 00:13:52.653 "bdev_lvol_resize", 00:13:52.653 "bdev_lvol_decouple_parent", 00:13:52.653 "bdev_lvol_inflate", 00:13:52.653 "bdev_lvol_rename", 00:13:52.653 "bdev_lvol_clone_bdev", 00:13:52.653 "bdev_lvol_clone", 00:13:52.653 "bdev_lvol_snapshot", 00:13:52.653 "bdev_lvol_create", 00:13:52.653 "bdev_lvol_delete_lvstore", 00:13:52.653 "bdev_lvol_rename_lvstore", 00:13:52.653 "bdev_lvol_create_lvstore", 00:13:52.653 "bdev_raid_set_options", 00:13:52.653 "bdev_raid_remove_base_bdev", 00:13:52.653 "bdev_raid_add_base_bdev", 00:13:52.653 "bdev_raid_delete", 00:13:52.653 "bdev_raid_create", 00:13:52.653 "bdev_raid_get_bdevs", 00:13:52.653 "bdev_error_inject_error", 
00:13:52.653 "bdev_error_delete", 00:13:52.653 "bdev_error_create", 00:13:52.653 "bdev_split_delete", 00:13:52.653 "bdev_split_create", 00:13:52.653 "bdev_delay_delete", 00:13:52.653 "bdev_delay_create", 00:13:52.653 "bdev_delay_update_latency", 00:13:52.653 "bdev_zone_block_delete", 00:13:52.653 "bdev_zone_block_create", 00:13:52.653 "blobfs_create", 00:13:52.653 "blobfs_detect", 00:13:52.653 "blobfs_set_cache_size", 00:13:52.653 "bdev_aio_delete", 00:13:52.653 "bdev_aio_rescan", 00:13:52.653 "bdev_aio_create", 00:13:52.653 "bdev_ftl_set_property", 00:13:52.653 "bdev_ftl_get_properties", 00:13:52.653 "bdev_ftl_get_stats", 00:13:52.653 "bdev_ftl_unmap", 00:13:52.653 "bdev_ftl_unload", 00:13:52.653 "bdev_ftl_delete", 00:13:52.653 "bdev_ftl_load", 00:13:52.653 "bdev_ftl_create", 00:13:52.653 "bdev_virtio_attach_controller", 00:13:52.653 "bdev_virtio_scsi_get_devices", 00:13:52.653 "bdev_virtio_detach_controller", 00:13:52.653 "bdev_virtio_blk_set_hotplug", 00:13:52.653 "bdev_iscsi_delete", 00:13:52.653 "bdev_iscsi_create", 00:13:52.653 "bdev_iscsi_set_options", 00:13:52.653 "bdev_uring_delete", 00:13:52.653 "bdev_uring_rescan", 00:13:52.653 "bdev_uring_create", 00:13:52.653 "accel_error_inject_error", 00:13:52.653 "ioat_scan_accel_module", 00:13:52.653 "dsa_scan_accel_module", 00:13:52.653 "iaa_scan_accel_module", 00:13:52.653 "keyring_file_remove_key", 00:13:52.653 "keyring_file_add_key", 00:13:52.653 "iscsi_set_options", 00:13:52.653 "iscsi_get_auth_groups", 00:13:52.653 "iscsi_auth_group_remove_secret", 00:13:52.653 "iscsi_auth_group_add_secret", 00:13:52.653 "iscsi_delete_auth_group", 00:13:52.653 "iscsi_create_auth_group", 00:13:52.653 "iscsi_set_discovery_auth", 00:13:52.653 "iscsi_get_options", 00:13:52.653 "iscsi_target_node_request_logout", 00:13:52.653 "iscsi_target_node_set_redirect", 00:13:52.653 "iscsi_target_node_set_auth", 00:13:52.653 "iscsi_target_node_add_lun", 00:13:52.653 "iscsi_get_stats", 00:13:52.653 "iscsi_get_connections", 00:13:52.653 "iscsi_portal_group_set_auth", 00:13:52.653 "iscsi_start_portal_group", 00:13:52.653 "iscsi_delete_portal_group", 00:13:52.653 "iscsi_create_portal_group", 00:13:52.653 "iscsi_get_portal_groups", 00:13:52.653 "iscsi_delete_target_node", 00:13:52.653 "iscsi_target_node_remove_pg_ig_maps", 00:13:52.653 "iscsi_target_node_add_pg_ig_maps", 00:13:52.653 "iscsi_create_target_node", 00:13:52.653 "iscsi_get_target_nodes", 00:13:52.653 "iscsi_delete_initiator_group", 00:13:52.653 "iscsi_initiator_group_remove_initiators", 00:13:52.653 "iscsi_initiator_group_add_initiators", 00:13:52.653 "iscsi_create_initiator_group", 00:13:52.653 "iscsi_get_initiator_groups", 00:13:52.653 "nvmf_set_crdt", 00:13:52.653 "nvmf_set_config", 00:13:52.653 "nvmf_set_max_subsystems", 00:13:52.653 "nvmf_subsystem_get_listeners", 00:13:52.653 "nvmf_subsystem_get_qpairs", 00:13:52.653 "nvmf_subsystem_get_controllers", 00:13:52.653 "nvmf_get_stats", 00:13:52.653 "nvmf_get_transports", 00:13:52.653 "nvmf_create_transport", 00:13:52.653 "nvmf_get_targets", 00:13:52.653 "nvmf_delete_target", 00:13:52.653 "nvmf_create_target", 00:13:52.653 "nvmf_subsystem_allow_any_host", 00:13:52.653 "nvmf_subsystem_remove_host", 00:13:52.653 "nvmf_subsystem_add_host", 00:13:52.653 "nvmf_ns_remove_host", 00:13:52.653 "nvmf_ns_add_host", 00:13:52.653 "nvmf_subsystem_remove_ns", 00:13:52.653 "nvmf_subsystem_add_ns", 00:13:52.653 "nvmf_subsystem_listener_set_ana_state", 00:13:52.653 "nvmf_discovery_get_referrals", 00:13:52.653 "nvmf_discovery_remove_referral", 00:13:52.653 
"nvmf_discovery_add_referral", 00:13:52.653 "nvmf_subsystem_remove_listener", 00:13:52.653 "nvmf_subsystem_add_listener", 00:13:52.653 "nvmf_delete_subsystem", 00:13:52.653 "nvmf_create_subsystem", 00:13:52.653 "nvmf_get_subsystems", 00:13:52.653 "env_dpdk_get_mem_stats", 00:13:52.653 "nbd_get_disks", 00:13:52.653 "nbd_stop_disk", 00:13:52.653 "nbd_start_disk", 00:13:52.653 "ublk_recover_disk", 00:13:52.653 "ublk_get_disks", 00:13:52.653 "ublk_stop_disk", 00:13:52.653 "ublk_start_disk", 00:13:52.653 "ublk_destroy_target", 00:13:52.653 "ublk_create_target", 00:13:52.653 "virtio_blk_create_transport", 00:13:52.653 "virtio_blk_get_transports", 00:13:52.653 "vhost_controller_set_coalescing", 00:13:52.653 "vhost_get_controllers", 00:13:52.653 "vhost_delete_controller", 00:13:52.653 "vhost_create_blk_controller", 00:13:52.653 "vhost_scsi_controller_remove_target", 00:13:52.653 "vhost_scsi_controller_add_target", 00:13:52.653 "vhost_start_scsi_controller", 00:13:52.653 "vhost_create_scsi_controller", 00:13:52.653 "thread_set_cpumask", 00:13:52.653 "framework_get_scheduler", 00:13:52.653 "framework_set_scheduler", 00:13:52.653 "framework_get_reactors", 00:13:52.653 "thread_get_io_channels", 00:13:52.653 "thread_get_pollers", 00:13:52.653 "thread_get_stats", 00:13:52.653 "framework_monitor_context_switch", 00:13:52.653 "spdk_kill_instance", 00:13:52.653 "log_enable_timestamps", 00:13:52.653 "log_get_flags", 00:13:52.653 "log_clear_flag", 00:13:52.653 "log_set_flag", 00:13:52.653 "log_get_level", 00:13:52.653 "log_set_level", 00:13:52.653 "log_get_print_level", 00:13:52.653 "log_set_print_level", 00:13:52.653 "framework_enable_cpumask_locks", 00:13:52.653 "framework_disable_cpumask_locks", 00:13:52.653 "framework_wait_init", 00:13:52.653 "framework_start_init", 00:13:52.654 "scsi_get_devices", 00:13:52.654 "bdev_get_histogram", 00:13:52.654 "bdev_enable_histogram", 00:13:52.654 "bdev_set_qos_limit", 00:13:52.654 "bdev_set_qd_sampling_period", 00:13:52.654 "bdev_get_bdevs", 00:13:52.654 "bdev_reset_iostat", 00:13:52.654 "bdev_get_iostat", 00:13:52.654 "bdev_examine", 00:13:52.654 "bdev_wait_for_examine", 00:13:52.654 "bdev_set_options", 00:13:52.654 "notify_get_notifications", 00:13:52.654 "notify_get_types", 00:13:52.654 "accel_get_stats", 00:13:52.654 "accel_set_options", 00:13:52.654 "accel_set_driver", 00:13:52.654 "accel_crypto_key_destroy", 00:13:52.654 "accel_crypto_keys_get", 00:13:52.654 "accel_crypto_key_create", 00:13:52.654 "accel_assign_opc", 00:13:52.654 "accel_get_module_info", 00:13:52.654 "accel_get_opc_assignments", 00:13:52.654 "vmd_rescan", 00:13:52.654 "vmd_remove_device", 00:13:52.654 "vmd_enable", 00:13:52.654 "sock_set_default_impl", 00:13:52.654 "sock_impl_set_options", 00:13:52.654 "sock_impl_get_options", 00:13:52.654 "iobuf_get_stats", 00:13:52.654 "iobuf_set_options", 00:13:52.654 "framework_get_pci_devices", 00:13:52.654 "framework_get_config", 00:13:52.654 "framework_get_subsystems", 00:13:52.654 "trace_get_info", 00:13:52.654 "trace_get_tpoint_group_mask", 00:13:52.654 "trace_disable_tpoint_group", 00:13:52.654 "trace_enable_tpoint_group", 00:13:52.654 "trace_clear_tpoint_mask", 00:13:52.654 "trace_set_tpoint_mask", 00:13:52.654 "keyring_get_keys", 00:13:52.654 "spdk_get_version", 00:13:52.654 "rpc_get_methods" 00:13:52.654 ] 00:13:52.654 09:42:15 -- spdkcli/tcp.sh@35 -- # timing_exit run_spdk_tgt_tcp 00:13:52.654 09:42:15 -- common/autotest_common.sh@716 -- # xtrace_disable 00:13:52.654 09:42:15 -- common/autotest_common.sh@10 -- # set +x 00:13:52.654 09:42:15 -- 
spdkcli/tcp.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:13:52.654 09:42:15 -- spdkcli/tcp.sh@38 -- # killprocess 59820 00:13:52.654 09:42:15 -- common/autotest_common.sh@936 -- # '[' -z 59820 ']' 00:13:52.654 09:42:15 -- common/autotest_common.sh@940 -- # kill -0 59820 00:13:52.654 09:42:15 -- common/autotest_common.sh@941 -- # uname 00:13:52.654 09:42:15 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:13:52.654 09:42:15 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 59820 00:13:52.654 killing process with pid 59820 00:13:52.654 09:42:15 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:13:52.654 09:42:15 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:13:52.654 09:42:15 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 59820' 00:13:52.654 09:42:15 -- common/autotest_common.sh@955 -- # kill 59820 00:13:52.654 09:42:15 -- common/autotest_common.sh@960 -- # wait 59820 00:13:55.190 ************************************ 00:13:55.190 END TEST spdkcli_tcp 00:13:55.190 ************************************ 00:13:55.190 00:13:55.190 real 0m4.117s 00:13:55.190 user 0m7.118s 00:13:55.190 sys 0m0.697s 00:13:55.190 09:42:18 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:13:55.190 09:42:18 -- common/autotest_common.sh@10 -- # set +x 00:13:55.190 09:42:18 -- spdk/autotest.sh@176 -- # run_test dpdk_mem_utility /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:13:55.190 09:42:18 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:13:55.190 09:42:18 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:13:55.190 09:42:18 -- common/autotest_common.sh@10 -- # set +x 00:13:55.190 ************************************ 00:13:55.190 START TEST dpdk_mem_utility 00:13:55.190 ************************************ 00:13:55.190 09:42:18 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:13:55.190 * Looking for test storage... 00:13:55.190 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility 00:13:55.190 09:42:18 -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:13:55.190 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:55.190 09:42:18 -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=59939 00:13:55.190 09:42:18 -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 59939 00:13:55.190 09:42:18 -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:13:55.190 09:42:18 -- common/autotest_common.sh@817 -- # '[' -z 59939 ']' 00:13:55.190 09:42:18 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:55.190 09:42:18 -- common/autotest_common.sh@822 -- # local max_retries=100 00:13:55.190 09:42:18 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:55.190 09:42:18 -- common/autotest_common.sh@826 -- # xtrace_disable 00:13:55.190 09:42:18 -- common/autotest_common.sh@10 -- # set +x 00:13:55.190 [2024-04-18 09:42:18.567398] Starting SPDK v24.05-pre git sha1 65b4e17c6 / DPDK 23.11.0 initialization... 
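The dpdk_mem_utility entries that follow ask the freshly started target to dump its DPDK memory state and then summarize the dump. A minimal sketch of that flow, assuming the target is up on the default /var/tmp/spdk.sock (which rpc.py uses when no -s option is given):

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
mem_script=/home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py
$rpc env_dpdk_get_mem_stats     # target writes /tmp/spdk_mem_dump.txt and returns its name
$mem_script                     # summarize heaps, mempools and memzones from the dump
$mem_script -m 0                # detailed element listing for heap id 0, as traced below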
00:13:55.190 [2024-04-18 09:42:18.567593] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59939 ] 00:13:55.449 [2024-04-18 09:42:18.750741] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:55.708 [2024-04-18 09:42:19.014381] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:13:56.647 09:42:19 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:13:56.647 09:42:19 -- common/autotest_common.sh@850 -- # return 0 00:13:56.647 09:42:19 -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT 00:13:56.647 09:42:19 -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats 00:13:56.647 09:42:19 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:56.647 09:42:19 -- common/autotest_common.sh@10 -- # set +x 00:13:56.647 { 00:13:56.647 "filename": "/tmp/spdk_mem_dump.txt" 00:13:56.647 } 00:13:56.647 09:42:19 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:56.647 09:42:19 -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:13:56.647 DPDK memory size 820.000000 MiB in 1 heap(s) 00:13:56.647 1 heaps totaling size 820.000000 MiB 00:13:56.647 size: 820.000000 MiB heap id: 0 00:13:56.647 end heaps---------- 00:13:56.647 8 mempools totaling size 598.116089 MiB 00:13:56.647 size: 212.674988 MiB name: PDU_immediate_data_Pool 00:13:56.647 size: 158.602051 MiB name: PDU_data_out_Pool 00:13:56.647 size: 84.521057 MiB name: bdev_io_59939 00:13:56.647 size: 51.011292 MiB name: evtpool_59939 00:13:56.647 size: 50.003479 MiB name: msgpool_59939 00:13:56.647 size: 21.763794 MiB name: PDU_Pool 00:13:56.647 size: 19.513306 MiB name: SCSI_TASK_Pool 00:13:56.647 size: 0.026123 MiB name: Session_Pool 00:13:56.647 end mempools------- 00:13:56.647 6 memzones totaling size 4.142822 MiB 00:13:56.647 size: 1.000366 MiB name: RG_ring_0_59939 00:13:56.647 size: 1.000366 MiB name: RG_ring_1_59939 00:13:56.647 size: 1.000366 MiB name: RG_ring_4_59939 00:13:56.647 size: 1.000366 MiB name: RG_ring_5_59939 00:13:56.647 size: 0.125366 MiB name: RG_ring_2_59939 00:13:56.647 size: 0.015991 MiB name: RG_ring_3_59939 00:13:56.647 end memzones------- 00:13:56.647 09:42:19 -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py -m 0 00:13:56.907 heap id: 0 total size: 820.000000 MiB number of busy elements: 298 number of free elements: 18 00:13:56.907 list of free elements. 
size: 18.452026 MiB 00:13:56.907 element at address: 0x200000400000 with size: 1.999451 MiB 00:13:56.907 element at address: 0x200000800000 with size: 1.996887 MiB 00:13:56.907 element at address: 0x200007000000 with size: 1.995972 MiB 00:13:56.907 element at address: 0x20000b200000 with size: 1.995972 MiB 00:13:56.907 element at address: 0x200019100040 with size: 0.999939 MiB 00:13:56.907 element at address: 0x200019500040 with size: 0.999939 MiB 00:13:56.907 element at address: 0x200019600000 with size: 0.999084 MiB 00:13:56.907 element at address: 0x200003e00000 with size: 0.996094 MiB 00:13:56.907 element at address: 0x200032200000 with size: 0.994324 MiB 00:13:56.907 element at address: 0x200018e00000 with size: 0.959656 MiB 00:13:56.907 element at address: 0x200019900040 with size: 0.936401 MiB 00:13:56.907 element at address: 0x200000200000 with size: 0.830200 MiB 00:13:56.907 element at address: 0x20001b000000 with size: 0.564636 MiB 00:13:56.907 element at address: 0x200019200000 with size: 0.487976 MiB 00:13:56.907 element at address: 0x200019a00000 with size: 0.485413 MiB 00:13:56.907 element at address: 0x200013800000 with size: 0.467651 MiB 00:13:56.907 element at address: 0x200028400000 with size: 0.390442 MiB 00:13:56.907 element at address: 0x200003a00000 with size: 0.351990 MiB 00:13:56.907 list of standard malloc elements. size: 199.283569 MiB 00:13:56.907 element at address: 0x20000b3fef80 with size: 132.000183 MiB 00:13:56.907 element at address: 0x2000071fef80 with size: 64.000183 MiB 00:13:56.907 element at address: 0x200018ffff80 with size: 1.000183 MiB 00:13:56.907 element at address: 0x2000193fff80 with size: 1.000183 MiB 00:13:56.907 element at address: 0x2000197fff80 with size: 1.000183 MiB 00:13:56.907 element at address: 0x2000003d9e80 with size: 0.140808 MiB 00:13:56.907 element at address: 0x2000199eff40 with size: 0.062683 MiB 00:13:56.907 element at address: 0x2000003fdf40 with size: 0.007996 MiB 00:13:56.907 element at address: 0x20000b1ff040 with size: 0.000427 MiB 00:13:56.907 element at address: 0x2000199efdc0 with size: 0.000366 MiB 00:13:56.907 element at address: 0x2000137ff040 with size: 0.000305 MiB 00:13:56.907 element at address: 0x2000002d4880 with size: 0.000244 MiB 00:13:56.907 element at address: 0x2000002d4980 with size: 0.000244 MiB 00:13:56.907 element at address: 0x2000002d4a80 with size: 0.000244 MiB 00:13:56.907 element at address: 0x2000002d4b80 with size: 0.000244 MiB 00:13:56.907 element at address: 0x2000002d4c80 with size: 0.000244 MiB 00:13:56.907 element at address: 0x2000002d4d80 with size: 0.000244 MiB 00:13:56.907 element at address: 0x2000002d4e80 with size: 0.000244 MiB 00:13:56.907 element at address: 0x2000002d4f80 with size: 0.000244 MiB 00:13:56.907 element at address: 0x2000002d5080 with size: 0.000244 MiB 00:13:56.907 element at address: 0x2000002d5180 with size: 0.000244 MiB 00:13:56.907 element at address: 0x2000002d5280 with size: 0.000244 MiB 00:13:56.907 element at address: 0x2000002d5380 with size: 0.000244 MiB 00:13:56.907 element at address: 0x2000002d5480 with size: 0.000244 MiB 00:13:56.907 element at address: 0x2000002d5580 with size: 0.000244 MiB 00:13:56.907 element at address: 0x2000002d5680 with size: 0.000244 MiB 00:13:56.907 element at address: 0x2000002d5780 with size: 0.000244 MiB 00:13:56.907 element at address: 0x2000002d5880 with size: 0.000244 MiB 00:13:56.907 element at address: 0x2000002d5980 with size: 0.000244 MiB 00:13:56.907 element at address: 0x2000002d5a80 with size: 0.000244 MiB 
00:13:56.907 element at address: 0x2000002d5b80 with size: 0.000244 MiB 00:13:56.907 element at address: 0x2000002d5c80 with size: 0.000244 MiB 00:13:56.907 element at address: 0x2000002d5d80 with size: 0.000244 MiB 00:13:56.907 element at address: 0x2000002d5e80 with size: 0.000244 MiB 00:13:56.907 element at address: 0x2000002d6100 with size: 0.000244 MiB 00:13:56.907 element at address: 0x2000002d6200 with size: 0.000244 MiB 00:13:56.907 element at address: 0x2000002d6300 with size: 0.000244 MiB 00:13:56.907 element at address: 0x2000002d6400 with size: 0.000244 MiB 00:13:56.907 element at address: 0x2000002d6500 with size: 0.000244 MiB 00:13:56.907 element at address: 0x2000002d6600 with size: 0.000244 MiB 00:13:56.907 element at address: 0x2000002d6700 with size: 0.000244 MiB 00:13:56.907 element at address: 0x2000002d6800 with size: 0.000244 MiB 00:13:56.907 element at address: 0x2000002d6900 with size: 0.000244 MiB 00:13:56.907 element at address: 0x2000002d6a00 with size: 0.000244 MiB 00:13:56.908 element at address: 0x2000002d6b00 with size: 0.000244 MiB 00:13:56.908 element at address: 0x2000002d6c00 with size: 0.000244 MiB 00:13:56.908 element at address: 0x2000002d6d00 with size: 0.000244 MiB 00:13:56.908 element at address: 0x2000002d6e00 with size: 0.000244 MiB 00:13:56.908 element at address: 0x2000002d6f00 with size: 0.000244 MiB 00:13:56.908 element at address: 0x2000002d7000 with size: 0.000244 MiB 00:13:56.908 element at address: 0x2000002d7100 with size: 0.000244 MiB 00:13:56.908 element at address: 0x2000002d7200 with size: 0.000244 MiB 00:13:56.908 element at address: 0x2000002d7300 with size: 0.000244 MiB 00:13:56.908 element at address: 0x2000002d7400 with size: 0.000244 MiB 00:13:56.908 element at address: 0x2000002d7500 with size: 0.000244 MiB 00:13:56.908 element at address: 0x2000002d7600 with size: 0.000244 MiB 00:13:56.908 element at address: 0x2000002d7700 with size: 0.000244 MiB 00:13:56.908 element at address: 0x2000002d7800 with size: 0.000244 MiB 00:13:56.908 element at address: 0x2000002d7900 with size: 0.000244 MiB 00:13:56.908 element at address: 0x2000002d7a00 with size: 0.000244 MiB 00:13:56.908 element at address: 0x2000002d7b00 with size: 0.000244 MiB 00:13:56.908 element at address: 0x2000003d9d80 with size: 0.000244 MiB 00:13:56.908 element at address: 0x200003a5a1c0 with size: 0.000244 MiB 00:13:56.908 element at address: 0x200003a5a2c0 with size: 0.000244 MiB 00:13:56.908 element at address: 0x200003a5a3c0 with size: 0.000244 MiB 00:13:56.908 element at address: 0x200003a5a4c0 with size: 0.000244 MiB 00:13:56.908 element at address: 0x200003a5a5c0 with size: 0.000244 MiB 00:13:56.908 element at address: 0x200003a5a6c0 with size: 0.000244 MiB 00:13:56.908 element at address: 0x200003a5a7c0 with size: 0.000244 MiB 00:13:56.908 element at address: 0x200003a5a8c0 with size: 0.000244 MiB 00:13:56.908 element at address: 0x200003a5a9c0 with size: 0.000244 MiB 00:13:56.908 element at address: 0x200003a5aac0 with size: 0.000244 MiB 00:13:56.908 element at address: 0x200003a5abc0 with size: 0.000244 MiB 00:13:56.908 element at address: 0x200003a5acc0 with size: 0.000244 MiB 00:13:56.908 element at address: 0x200003a5adc0 with size: 0.000244 MiB 00:13:56.908 element at address: 0x200003a5aec0 with size: 0.000244 MiB 00:13:56.908 element at address: 0x200003a5afc0 with size: 0.000244 MiB 00:13:56.908 element at address: 0x200003a5b0c0 with size: 0.000244 MiB 00:13:56.908 element at address: 0x200003a5b1c0 with size: 0.000244 MiB 00:13:56.908 element at 
address: 0x200003aff980 with size: 0.000244 MiB 00:13:56.908 element at address: 0x200003affa80 with size: 0.000244 MiB 00:13:56.908 element at address: 0x200003eff000 with size: 0.000244 MiB 00:13:56.908 element at address: 0x20000b1ff200 with size: 0.000244 MiB 00:13:56.908 element at address: 0x20000b1ff300 with size: 0.000244 MiB 00:13:56.908 element at address: 0x20000b1ff400 with size: 0.000244 MiB 00:13:56.908 element at address: 0x20000b1ff500 with size: 0.000244 MiB 00:13:56.908 element at address: 0x20000b1ff600 with size: 0.000244 MiB 00:13:56.908 element at address: 0x20000b1ff700 with size: 0.000244 MiB 00:13:56.908 element at address: 0x20000b1ff800 with size: 0.000244 MiB 00:13:56.908 element at address: 0x20000b1ff900 with size: 0.000244 MiB 00:13:56.908 element at address: 0x20000b1ffa00 with size: 0.000244 MiB 00:13:56.908 element at address: 0x20000b1ffb00 with size: 0.000244 MiB 00:13:56.908 element at address: 0x20000b1ffc00 with size: 0.000244 MiB 00:13:56.908 element at address: 0x20000b1ffd00 with size: 0.000244 MiB 00:13:56.908 element at address: 0x20000b1ffe00 with size: 0.000244 MiB 00:13:56.908 element at address: 0x20000b1fff00 with size: 0.000244 MiB 00:13:56.908 element at address: 0x2000137ff180 with size: 0.000244 MiB 00:13:56.908 element at address: 0x2000137ff280 with size: 0.000244 MiB 00:13:56.908 element at address: 0x2000137ff380 with size: 0.000244 MiB 00:13:56.908 element at address: 0x2000137ff480 with size: 0.000244 MiB 00:13:56.908 element at address: 0x2000137ff580 with size: 0.000244 MiB 00:13:56.908 element at address: 0x2000137ff680 with size: 0.000244 MiB 00:13:56.908 element at address: 0x2000137ff780 with size: 0.000244 MiB 00:13:56.908 element at address: 0x2000137ff880 with size: 0.000244 MiB 00:13:56.908 element at address: 0x2000137ff980 with size: 0.000244 MiB 00:13:56.908 element at address: 0x2000137ffa80 with size: 0.000244 MiB 00:13:56.908 element at address: 0x2000137ffb80 with size: 0.000244 MiB 00:13:56.908 element at address: 0x2000137ffc80 with size: 0.000244 MiB 00:13:56.908 element at address: 0x2000137fff00 with size: 0.000244 MiB 00:13:56.908 element at address: 0x200013877b80 with size: 0.000244 MiB 00:13:56.908 element at address: 0x200013877c80 with size: 0.000244 MiB 00:13:56.908 element at address: 0x200013877d80 with size: 0.000244 MiB 00:13:56.908 element at address: 0x200013877e80 with size: 0.000244 MiB 00:13:56.908 element at address: 0x200013877f80 with size: 0.000244 MiB 00:13:56.908 element at address: 0x200013878080 with size: 0.000244 MiB 00:13:56.908 element at address: 0x200013878180 with size: 0.000244 MiB 00:13:56.908 element at address: 0x200013878280 with size: 0.000244 MiB 00:13:56.908 element at address: 0x200013878380 with size: 0.000244 MiB 00:13:56.908 element at address: 0x200013878480 with size: 0.000244 MiB 00:13:56.908 element at address: 0x200013878580 with size: 0.000244 MiB 00:13:56.908 element at address: 0x2000138f88c0 with size: 0.000244 MiB 00:13:56.908 element at address: 0x200018efdd00 with size: 0.000244 MiB 00:13:56.908 element at address: 0x20001927cec0 with size: 0.000244 MiB 00:13:56.908 element at address: 0x20001927cfc0 with size: 0.000244 MiB 00:13:56.908 element at address: 0x20001927d0c0 with size: 0.000244 MiB 00:13:56.908 element at address: 0x20001927d1c0 with size: 0.000244 MiB 00:13:56.908 element at address: 0x20001927d2c0 with size: 0.000244 MiB 00:13:56.908 element at address: 0x20001927d3c0 with size: 0.000244 MiB 00:13:56.908 element at address: 0x20001927d4c0 
with size: 0.000244 MiB 00:13:56.908 element at address: 0x20001927d5c0 with size: 0.000244 MiB 00:13:56.908 element at address: 0x20001927d6c0 with size: 0.000244 MiB 00:13:56.908 element at address: 0x20001927d7c0 with size: 0.000244 MiB 00:13:56.908 element at address: 0x20001927d8c0 with size: 0.000244 MiB 00:13:56.908 element at address: 0x20001927d9c0 with size: 0.000244 MiB 00:13:56.908 element at address: 0x2000192fdd00 with size: 0.000244 MiB 00:13:56.908 element at address: 0x2000196ffc40 with size: 0.000244 MiB 00:13:56.908 element at address: 0x2000199efbc0 with size: 0.000244 MiB 00:13:56.908 element at address: 0x2000199efcc0 with size: 0.000244 MiB 00:13:56.908 element at address: 0x200019abc680 with size: 0.000244 MiB 00:13:56.908 element at address: 0x20001b0908c0 with size: 0.000244 MiB 00:13:56.908 element at address: 0x20001b0909c0 with size: 0.000244 MiB 00:13:56.908 element at address: 0x20001b090ac0 with size: 0.000244 MiB 00:13:56.908 element at address: 0x20001b090bc0 with size: 0.000244 MiB 00:13:56.908 element at address: 0x20001b090cc0 with size: 0.000244 MiB 00:13:56.908 element at address: 0x20001b090dc0 with size: 0.000244 MiB 00:13:56.908 element at address: 0x20001b090ec0 with size: 0.000244 MiB 00:13:56.908 element at address: 0x20001b090fc0 with size: 0.000244 MiB 00:13:56.908 element at address: 0x20001b0910c0 with size: 0.000244 MiB 00:13:56.908 element at address: 0x20001b0911c0 with size: 0.000244 MiB 00:13:56.908 element at address: 0x20001b0912c0 with size: 0.000244 MiB 00:13:56.908 element at address: 0x20001b0913c0 with size: 0.000244 MiB 00:13:56.908 element at address: 0x20001b0914c0 with size: 0.000244 MiB 00:13:56.908 element at address: 0x20001b0915c0 with size: 0.000244 MiB 00:13:56.908 element at address: 0x20001b0916c0 with size: 0.000244 MiB 00:13:56.908 element at address: 0x20001b0917c0 with size: 0.000244 MiB 00:13:56.908 element at address: 0x20001b0918c0 with size: 0.000244 MiB 00:13:56.908 element at address: 0x20001b0919c0 with size: 0.000244 MiB 00:13:56.908 element at address: 0x20001b091ac0 with size: 0.000244 MiB 00:13:56.908 element at address: 0x20001b091bc0 with size: 0.000244 MiB 00:13:56.908 element at address: 0x20001b091cc0 with size: 0.000244 MiB 00:13:56.908 element at address: 0x20001b091dc0 with size: 0.000244 MiB 00:13:56.908 element at address: 0x20001b091ec0 with size: 0.000244 MiB 00:13:56.908 element at address: 0x20001b091fc0 with size: 0.000244 MiB 00:13:56.908 element at address: 0x20001b0920c0 with size: 0.000244 MiB 00:13:56.908 element at address: 0x20001b0921c0 with size: 0.000244 MiB 00:13:56.908 element at address: 0x20001b0922c0 with size: 0.000244 MiB 00:13:56.908 element at address: 0x20001b0923c0 with size: 0.000244 MiB 00:13:56.908 element at address: 0x20001b0924c0 with size: 0.000244 MiB 00:13:56.908 element at address: 0x20001b0925c0 with size: 0.000244 MiB 00:13:56.908 element at address: 0x20001b0926c0 with size: 0.000244 MiB 00:13:56.908 element at address: 0x20001b0927c0 with size: 0.000244 MiB 00:13:56.908 element at address: 0x20001b0928c0 with size: 0.000244 MiB 00:13:56.908 element at address: 0x20001b0929c0 with size: 0.000244 MiB 00:13:56.908 element at address: 0x20001b092ac0 with size: 0.000244 MiB 00:13:56.908 element at address: 0x20001b092bc0 with size: 0.000244 MiB 00:13:56.908 element at address: 0x20001b092cc0 with size: 0.000244 MiB 00:13:56.908 element at address: 0x20001b092dc0 with size: 0.000244 MiB 00:13:56.908 element at address: 0x20001b092ec0 with size: 0.000244 MiB 
00:13:56.908 element at address: 0x20001b092fc0 with size: 0.000244 MiB 00:13:56.908 element at address: 0x20001b0930c0 with size: 0.000244 MiB 00:13:56.908 element at address: 0x20001b0931c0 with size: 0.000244 MiB 00:13:56.908 element at address: 0x20001b0932c0 with size: 0.000244 MiB 00:13:56.908 element at address: 0x20001b0933c0 with size: 0.000244 MiB 00:13:56.908 element at address: 0x20001b0934c0 with size: 0.000244 MiB 00:13:56.908 element at address: 0x20001b0935c0 with size: 0.000244 MiB 00:13:56.908 element at address: 0x20001b0936c0 with size: 0.000244 MiB 00:13:56.908 element at address: 0x20001b0937c0 with size: 0.000244 MiB 00:13:56.908 element at address: 0x20001b0938c0 with size: 0.000244 MiB 00:13:56.908 element at address: 0x20001b0939c0 with size: 0.000244 MiB 00:13:56.908 element at address: 0x20001b093ac0 with size: 0.000244 MiB 00:13:56.908 element at address: 0x20001b093bc0 with size: 0.000244 MiB 00:13:56.908 element at address: 0x20001b093cc0 with size: 0.000244 MiB 00:13:56.908 element at address: 0x20001b093dc0 with size: 0.000244 MiB 00:13:56.908 element at address: 0x20001b093ec0 with size: 0.000244 MiB 00:13:56.908 element at address: 0x20001b093fc0 with size: 0.000244 MiB 00:13:56.908 element at address: 0x20001b0940c0 with size: 0.000244 MiB 00:13:56.908 element at address: 0x20001b0941c0 with size: 0.000244 MiB 00:13:56.909 element at address: 0x20001b0942c0 with size: 0.000244 MiB 00:13:56.909 element at address: 0x20001b0943c0 with size: 0.000244 MiB 00:13:56.909 element at address: 0x20001b0944c0 with size: 0.000244 MiB 00:13:56.909 element at address: 0x20001b0945c0 with size: 0.000244 MiB 00:13:56.909 element at address: 0x20001b0946c0 with size: 0.000244 MiB 00:13:56.909 element at address: 0x20001b0947c0 with size: 0.000244 MiB 00:13:56.909 element at address: 0x20001b0948c0 with size: 0.000244 MiB 00:13:56.909 element at address: 0x20001b0949c0 with size: 0.000244 MiB 00:13:56.909 element at address: 0x20001b094ac0 with size: 0.000244 MiB 00:13:56.909 element at address: 0x20001b094bc0 with size: 0.000244 MiB 00:13:56.909 element at address: 0x20001b094cc0 with size: 0.000244 MiB 00:13:56.909 element at address: 0x20001b094dc0 with size: 0.000244 MiB 00:13:56.909 element at address: 0x20001b094ec0 with size: 0.000244 MiB 00:13:56.909 element at address: 0x20001b094fc0 with size: 0.000244 MiB 00:13:56.909 element at address: 0x20001b0950c0 with size: 0.000244 MiB 00:13:56.909 element at address: 0x20001b0951c0 with size: 0.000244 MiB 00:13:56.909 element at address: 0x20001b0952c0 with size: 0.000244 MiB 00:13:56.909 element at address: 0x20001b0953c0 with size: 0.000244 MiB 00:13:56.909 element at address: 0x200028463f40 with size: 0.000244 MiB 00:13:56.909 element at address: 0x200028464040 with size: 0.000244 MiB 00:13:56.909 element at address: 0x20002846ad00 with size: 0.000244 MiB 00:13:56.909 element at address: 0x20002846af80 with size: 0.000244 MiB 00:13:56.909 element at address: 0x20002846b080 with size: 0.000244 MiB 00:13:56.909 element at address: 0x20002846b180 with size: 0.000244 MiB 00:13:56.909 element at address: 0x20002846b280 with size: 0.000244 MiB 00:13:56.909 element at address: 0x20002846b380 with size: 0.000244 MiB 00:13:56.909 element at address: 0x20002846b480 with size: 0.000244 MiB 00:13:56.909 element at address: 0x20002846b580 with size: 0.000244 MiB 00:13:56.909 element at address: 0x20002846b680 with size: 0.000244 MiB 00:13:56.909 element at address: 0x20002846b780 with size: 0.000244 MiB 00:13:56.909 element at 
address: 0x20002846b880 with size: 0.000244 MiB 00:13:56.909 element at address: 0x20002846b980 with size: 0.000244 MiB 00:13:56.909 element at address: 0x20002846ba80 with size: 0.000244 MiB 00:13:56.909 element at address: 0x20002846bb80 with size: 0.000244 MiB 00:13:56.909 element at address: 0x20002846bc80 with size: 0.000244 MiB 00:13:56.909 element at address: 0x20002846bd80 with size: 0.000244 MiB 00:13:56.909 element at address: 0x20002846be80 with size: 0.000244 MiB 00:13:56.909 element at address: 0x20002846bf80 with size: 0.000244 MiB 00:13:56.909 element at address: 0x20002846c080 with size: 0.000244 MiB 00:13:56.909 element at address: 0x20002846c180 with size: 0.000244 MiB 00:13:56.909 element at address: 0x20002846c280 with size: 0.000244 MiB 00:13:56.909 element at address: 0x20002846c380 with size: 0.000244 MiB 00:13:56.909 element at address: 0x20002846c480 with size: 0.000244 MiB 00:13:56.909 element at address: 0x20002846c580 with size: 0.000244 MiB 00:13:56.909 element at address: 0x20002846c680 with size: 0.000244 MiB 00:13:56.909 element at address: 0x20002846c780 with size: 0.000244 MiB 00:13:56.909 element at address: 0x20002846c880 with size: 0.000244 MiB 00:13:56.909 element at address: 0x20002846c980 with size: 0.000244 MiB 00:13:56.909 element at address: 0x20002846ca80 with size: 0.000244 MiB 00:13:56.909 element at address: 0x20002846cb80 with size: 0.000244 MiB 00:13:56.909 element at address: 0x20002846cc80 with size: 0.000244 MiB 00:13:56.909 element at address: 0x20002846cd80 with size: 0.000244 MiB 00:13:56.909 element at address: 0x20002846ce80 with size: 0.000244 MiB 00:13:56.909 element at address: 0x20002846cf80 with size: 0.000244 MiB 00:13:56.909 element at address: 0x20002846d080 with size: 0.000244 MiB 00:13:56.909 element at address: 0x20002846d180 with size: 0.000244 MiB 00:13:56.909 element at address: 0x20002846d280 with size: 0.000244 MiB 00:13:56.909 element at address: 0x20002846d380 with size: 0.000244 MiB 00:13:56.909 element at address: 0x20002846d480 with size: 0.000244 MiB 00:13:56.909 element at address: 0x20002846d580 with size: 0.000244 MiB 00:13:56.909 element at address: 0x20002846d680 with size: 0.000244 MiB 00:13:56.909 element at address: 0x20002846d780 with size: 0.000244 MiB 00:13:56.909 element at address: 0x20002846d880 with size: 0.000244 MiB 00:13:56.909 element at address: 0x20002846d980 with size: 0.000244 MiB 00:13:56.909 element at address: 0x20002846da80 with size: 0.000244 MiB 00:13:56.909 element at address: 0x20002846db80 with size: 0.000244 MiB 00:13:56.909 element at address: 0x20002846dc80 with size: 0.000244 MiB 00:13:56.909 element at address: 0x20002846dd80 with size: 0.000244 MiB 00:13:56.909 element at address: 0x20002846de80 with size: 0.000244 MiB 00:13:56.909 element at address: 0x20002846df80 with size: 0.000244 MiB 00:13:56.909 element at address: 0x20002846e080 with size: 0.000244 MiB 00:13:56.909 element at address: 0x20002846e180 with size: 0.000244 MiB 00:13:56.909 element at address: 0x20002846e280 with size: 0.000244 MiB 00:13:56.909 element at address: 0x20002846e380 with size: 0.000244 MiB 00:13:56.909 element at address: 0x20002846e480 with size: 0.000244 MiB 00:13:56.909 element at address: 0x20002846e580 with size: 0.000244 MiB 00:13:56.909 element at address: 0x20002846e680 with size: 0.000244 MiB 00:13:56.909 element at address: 0x20002846e780 with size: 0.000244 MiB 00:13:56.909 element at address: 0x20002846e880 with size: 0.000244 MiB 00:13:56.909 element at address: 0x20002846e980 
with size: 0.000244 MiB 00:13:56.909 element at address: 0x20002846ea80 with size: 0.000244 MiB 00:13:56.909 element at address: 0x20002846eb80 with size: 0.000244 MiB 00:13:56.909 element at address: 0x20002846ec80 with size: 0.000244 MiB 00:13:56.909 element at address: 0x20002846ed80 with size: 0.000244 MiB 00:13:56.909 element at address: 0x20002846ee80 with size: 0.000244 MiB 00:13:56.909 element at address: 0x20002846ef80 with size: 0.000244 MiB 00:13:56.909 element at address: 0x20002846f080 with size: 0.000244 MiB 00:13:56.909 element at address: 0x20002846f180 with size: 0.000244 MiB 00:13:56.909 element at address: 0x20002846f280 with size: 0.000244 MiB 00:13:56.909 element at address: 0x20002846f380 with size: 0.000244 MiB 00:13:56.909 element at address: 0x20002846f480 with size: 0.000244 MiB 00:13:56.909 element at address: 0x20002846f580 with size: 0.000244 MiB 00:13:56.909 element at address: 0x20002846f680 with size: 0.000244 MiB 00:13:56.909 element at address: 0x20002846f780 with size: 0.000244 MiB 00:13:56.909 element at address: 0x20002846f880 with size: 0.000244 MiB 00:13:56.909 element at address: 0x20002846f980 with size: 0.000244 MiB 00:13:56.909 element at address: 0x20002846fa80 with size: 0.000244 MiB 00:13:56.909 element at address: 0x20002846fb80 with size: 0.000244 MiB 00:13:56.909 element at address: 0x20002846fc80 with size: 0.000244 MiB 00:13:56.909 element at address: 0x20002846fd80 with size: 0.000244 MiB 00:13:56.909 element at address: 0x20002846fe80 with size: 0.000244 MiB 00:13:56.909 list of memzone associated elements. size: 602.264404 MiB 00:13:56.909 element at address: 0x20001b0954c0 with size: 211.416809 MiB 00:13:56.909 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0 00:13:56.909 element at address: 0x20002846ff80 with size: 157.562622 MiB 00:13:56.909 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0 00:13:56.909 element at address: 0x2000139fab40 with size: 84.020691 MiB 00:13:56.909 associated memzone info: size: 84.020508 MiB name: MP_bdev_io_59939_0 00:13:56.909 element at address: 0x2000009ff340 with size: 48.003113 MiB 00:13:56.909 associated memzone info: size: 48.002930 MiB name: MP_evtpool_59939_0 00:13:56.909 element at address: 0x200003fff340 with size: 48.003113 MiB 00:13:56.909 associated memzone info: size: 48.002930 MiB name: MP_msgpool_59939_0 00:13:56.909 element at address: 0x200019bbe900 with size: 20.255615 MiB 00:13:56.909 associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0 00:13:56.909 element at address: 0x2000323feb00 with size: 18.005127 MiB 00:13:56.909 associated memzone info: size: 18.004944 MiB name: MP_SCSI_TASK_Pool_0 00:13:56.909 element at address: 0x2000005ffdc0 with size: 2.000549 MiB 00:13:56.909 associated memzone info: size: 2.000366 MiB name: RG_MP_evtpool_59939 00:13:56.909 element at address: 0x200003bffdc0 with size: 2.000549 MiB 00:13:56.909 associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_59939 00:13:56.909 element at address: 0x2000002d7c00 with size: 1.008179 MiB 00:13:56.909 associated memzone info: size: 1.007996 MiB name: MP_evtpool_59939 00:13:56.909 element at address: 0x2000192fde00 with size: 1.008179 MiB 00:13:56.909 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool 00:13:56.909 element at address: 0x200019abc780 with size: 1.008179 MiB 00:13:56.909 associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool 00:13:56.909 element at address: 0x200018efde00 with size: 1.008179 
MiB 00:13:56.909 associated memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool 00:13:56.909 element at address: 0x2000138f89c0 with size: 1.008179 MiB 00:13:56.909 associated memzone info: size: 1.007996 MiB name: MP_SCSI_TASK_Pool 00:13:56.909 element at address: 0x200003eff100 with size: 1.000549 MiB 00:13:56.909 associated memzone info: size: 1.000366 MiB name: RG_ring_0_59939 00:13:56.909 element at address: 0x200003affb80 with size: 1.000549 MiB 00:13:56.909 associated memzone info: size: 1.000366 MiB name: RG_ring_1_59939 00:13:56.909 element at address: 0x2000196ffd40 with size: 1.000549 MiB 00:13:56.909 associated memzone info: size: 1.000366 MiB name: RG_ring_4_59939 00:13:56.909 element at address: 0x2000322fe8c0 with size: 1.000549 MiB 00:13:56.909 associated memzone info: size: 1.000366 MiB name: RG_ring_5_59939 00:13:56.909 element at address: 0x200003a5b2c0 with size: 0.500549 MiB 00:13:56.909 associated memzone info: size: 0.500366 MiB name: RG_MP_bdev_io_59939 00:13:56.909 element at address: 0x20001927dac0 with size: 0.500549 MiB 00:13:56.909 associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool 00:13:56.909 element at address: 0x200013878680 with size: 0.500549 MiB 00:13:56.909 associated memzone info: size: 0.500366 MiB name: RG_MP_SCSI_TASK_Pool 00:13:56.909 element at address: 0x200019a7c440 with size: 0.250549 MiB 00:13:56.909 associated memzone info: size: 0.250366 MiB name: RG_MP_PDU_immediate_data_Pool 00:13:56.909 element at address: 0x200003adf740 with size: 0.125549 MiB 00:13:56.909 associated memzone info: size: 0.125366 MiB name: RG_ring_2_59939 00:13:56.909 element at address: 0x200018ef5ac0 with size: 0.031799 MiB 00:13:56.910 associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool 00:13:56.910 element at address: 0x200028464140 with size: 0.023804 MiB 00:13:56.910 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0 00:13:56.910 element at address: 0x200003adb500 with size: 0.016174 MiB 00:13:56.910 associated memzone info: size: 0.015991 MiB name: RG_ring_3_59939 00:13:56.910 element at address: 0x20002846a2c0 with size: 0.002502 MiB 00:13:56.910 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool 00:13:56.910 element at address: 0x2000002d5f80 with size: 0.000366 MiB 00:13:56.910 associated memzone info: size: 0.000183 MiB name: MP_msgpool_59939 00:13:56.910 element at address: 0x2000137ffd80 with size: 0.000366 MiB 00:13:56.910 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_59939 00:13:56.910 element at address: 0x20002846ae00 with size: 0.000366 MiB 00:13:56.910 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool 00:13:56.910 09:42:20 -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT 00:13:56.910 09:42:20 -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 59939 00:13:56.910 09:42:20 -- common/autotest_common.sh@936 -- # '[' -z 59939 ']' 00:13:56.910 09:42:20 -- common/autotest_common.sh@940 -- # kill -0 59939 00:13:56.910 09:42:20 -- common/autotest_common.sh@941 -- # uname 00:13:56.910 09:42:20 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:13:56.910 09:42:20 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 59939 00:13:56.910 killing process with pid 59939 00:13:56.910 09:42:20 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:13:56.910 09:42:20 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:13:56.910 09:42:20 -- common/autotest_common.sh@954 -- # 
echo 'killing process with pid 59939' 00:13:56.910 09:42:20 -- common/autotest_common.sh@955 -- # kill 59939 00:13:56.910 09:42:20 -- common/autotest_common.sh@960 -- # wait 59939 00:13:59.443 00:13:59.443 real 0m4.150s 00:13:59.443 user 0m4.147s 00:13:59.443 sys 0m0.653s 00:13:59.443 09:42:22 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:13:59.443 09:42:22 -- common/autotest_common.sh@10 -- # set +x 00:13:59.443 ************************************ 00:13:59.443 END TEST dpdk_mem_utility 00:13:59.443 ************************************ 00:13:59.443 09:42:22 -- spdk/autotest.sh@177 -- # run_test event /home/vagrant/spdk_repo/spdk/test/event/event.sh 00:13:59.443 09:42:22 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:13:59.443 09:42:22 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:13:59.443 09:42:22 -- common/autotest_common.sh@10 -- # set +x 00:13:59.443 ************************************ 00:13:59.443 START TEST event 00:13:59.443 ************************************ 00:13:59.443 09:42:22 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/event/event.sh 00:13:59.443 * Looking for test storage... 00:13:59.443 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event 00:13:59.443 09:42:22 -- event/event.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:13:59.443 09:42:22 -- bdev/nbd_common.sh@6 -- # set -e 00:13:59.443 09:42:22 -- event/event.sh@45 -- # run_test event_perf /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:13:59.443 09:42:22 -- common/autotest_common.sh@1087 -- # '[' 6 -le 1 ']' 00:13:59.443 09:42:22 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:13:59.443 09:42:22 -- common/autotest_common.sh@10 -- # set +x 00:13:59.443 ************************************ 00:13:59.443 START TEST event_perf 00:13:59.443 ************************************ 00:13:59.443 09:42:22 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:13:59.702 Running I/O for 1 seconds...[2024-04-18 09:42:22.841382] Starting SPDK v24.05-pre git sha1 65b4e17c6 / DPDK 23.11.0 initialization... 00:13:59.702 [2024-04-18 09:42:22.841701] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60050 ] 00:13:59.702 [2024-04-18 09:42:23.014082] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:13:59.959 [2024-04-18 09:42:23.311775] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:13:59.959 [2024-04-18 09:42:23.312057] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:13:59.959 Running I/O for 1 seconds...[2024-04-18 09:42:23.311997] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:13:59.959 [2024-04-18 09:42:23.312865] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:14:01.335 00:14:01.335 lcore 0: 109088 00:14:01.335 lcore 1: 109090 00:14:01.335 lcore 2: 109087 00:14:01.335 lcore 3: 109085 00:14:01.335 done. 
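Note on the banners above: each test is driven through a run_test-style wrapper that prints the START TEST / END TEST banners and the real/user/sys timing seen in this log. The sketch below is only an inferred simplification of that pattern (the actual helper lives in test/common/autotest_common.sh and also manages xtrace); the banner and timing format are taken from the output above.

#!/usr/bin/env bash
# Simplified, illustrative run_test-style wrapper; not the SPDK helper itself.
run_test_sketch() {
    local name=$1; shift
    echo "************************************"
    echo "START TEST $name"
    echo "************************************"
    time "$@"            # produces the real/user/sys lines seen in the log
    local rc=$?
    echo "************************************"
    echo "END TEST $name"
    echo "************************************"
    return $rc
}
# Example, matching the invocation traced above:
# run_test_sketch event_perf ./test/event/event_perf/event_perf -m 0xF -t 1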
00:14:01.335 00:14:01.335 ************************************ 00:14:01.335 END TEST event_perf 00:14:01.335 ************************************ 00:14:01.335 real 0m1.905s 00:14:01.335 user 0m4.615s 00:14:01.335 sys 0m0.138s 00:14:01.335 09:42:24 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:14:01.335 09:42:24 -- common/autotest_common.sh@10 -- # set +x 00:14:01.593 09:42:24 -- event/event.sh@46 -- # run_test event_reactor /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1 00:14:01.593 09:42:24 -- common/autotest_common.sh@1087 -- # '[' 4 -le 1 ']' 00:14:01.593 09:42:24 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:14:01.593 09:42:24 -- common/autotest_common.sh@10 -- # set +x 00:14:01.593 ************************************ 00:14:01.593 START TEST event_reactor 00:14:01.593 ************************************ 00:14:01.593 09:42:24 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1 00:14:01.593 [2024-04-18 09:42:24.878587] Starting SPDK v24.05-pre git sha1 65b4e17c6 / DPDK 23.11.0 initialization... 00:14:01.593 [2024-04-18 09:42:24.878782] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60099 ] 00:14:01.851 [2024-04-18 09:42:25.055040] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:02.110 [2024-04-18 09:42:25.306599] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:14:03.485 test_start 00:14:03.485 oneshot 00:14:03.485 tick 100 00:14:03.485 tick 100 00:14:03.485 tick 250 00:14:03.485 tick 100 00:14:03.485 tick 100 00:14:03.485 tick 250 00:14:03.485 tick 100 00:14:03.485 tick 500 00:14:03.485 tick 100 00:14:03.485 tick 100 00:14:03.485 tick 250 00:14:03.485 tick 100 00:14:03.485 tick 100 00:14:03.485 test_end 00:14:03.485 ************************************ 00:14:03.485 END TEST event_reactor 00:14:03.485 ************************************ 00:14:03.485 00:14:03.485 real 0m1.856s 00:14:03.485 user 0m1.616s 00:14:03.485 sys 0m0.129s 00:14:03.485 09:42:26 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:14:03.485 09:42:26 -- common/autotest_common.sh@10 -- # set +x 00:14:03.485 09:42:26 -- event/event.sh@47 -- # run_test event_reactor_perf /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1 00:14:03.485 09:42:26 -- common/autotest_common.sh@1087 -- # '[' 4 -le 1 ']' 00:14:03.485 09:42:26 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:14:03.485 09:42:26 -- common/autotest_common.sh@10 -- # set +x 00:14:03.485 ************************************ 00:14:03.485 START TEST event_reactor_perf 00:14:03.485 ************************************ 00:14:03.485 09:42:26 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1 00:14:03.485 [2024-04-18 09:42:26.863337] Starting SPDK v24.05-pre git sha1 65b4e17c6 / DPDK 23.11.0 initialization... 
00:14:03.485 [2024-04-18 09:42:26.863536] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60146 ] 00:14:03.743 [2024-04-18 09:42:27.038046] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:04.001 [2024-04-18 09:42:27.280339] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:14:05.377 test_start 00:14:05.377 test_end 00:14:05.377 Performance: 248050 events per second 00:14:05.377 00:14:05.377 real 0m1.888s 00:14:05.377 user 0m1.637s 00:14:05.377 sys 0m0.137s 00:14:05.377 09:42:28 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:14:05.377 ************************************ 00:14:05.377 END TEST event_reactor_perf 00:14:05.377 ************************************ 00:14:05.377 09:42:28 -- common/autotest_common.sh@10 -- # set +x 00:14:05.377 09:42:28 -- event/event.sh@49 -- # uname -s 00:14:05.377 09:42:28 -- event/event.sh@49 -- # '[' Linux = Linux ']' 00:14:05.377 09:42:28 -- event/event.sh@50 -- # run_test event_scheduler /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh 00:14:05.377 09:42:28 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:14:05.377 09:42:28 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:14:05.377 09:42:28 -- common/autotest_common.sh@10 -- # set +x 00:14:05.636 ************************************ 00:14:05.636 START TEST event_scheduler 00:14:05.636 ************************************ 00:14:05.636 09:42:28 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh 00:14:05.636 * Looking for test storage... 00:14:05.636 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event/scheduler 00:14:05.636 09:42:28 -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd 00:14:05.636 09:42:28 -- scheduler/scheduler.sh@35 -- # scheduler_pid=60219 00:14:05.636 09:42:28 -- scheduler/scheduler.sh@34 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f 00:14:05.636 09:42:28 -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT 00:14:05.636 09:42:28 -- scheduler/scheduler.sh@37 -- # waitforlisten 60219 00:14:05.636 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:05.636 09:42:28 -- common/autotest_common.sh@817 -- # '[' -z 60219 ']' 00:14:05.636 09:42:28 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:05.636 09:42:28 -- common/autotest_common.sh@822 -- # local max_retries=100 00:14:05.636 09:42:28 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:05.636 09:42:28 -- common/autotest_common.sh@826 -- # xtrace_disable 00:14:05.636 09:42:28 -- common/autotest_common.sh@10 -- # set +x 00:14:05.636 [2024-04-18 09:42:28.999098] Starting SPDK v24.05-pre git sha1 65b4e17c6 / DPDK 23.11.0 initialization... 
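The scheduler test above starts its target with --wait-for-rpc and then calls waitforlisten, which blocks until the process exposes its RPC UNIX socket. The helper below is only a minimal sketch of that idea (the real implementation in common/autotest_common.sh does more); the default socket path matches the "Waiting for process..." message in the log.

# Illustrative waitforlisten-style polling loop, assuming a default RPC socket path.
waitforlisten_sketch() {
    local pid=$1
    local sock=${2:-/var/tmp/spdk.sock}
    local retries=100
    echo "Waiting for process to start up and listen on UNIX domain socket $sock..."
    while (( retries-- > 0 )); do
        kill -0 "$pid" 2>/dev/null || return 1   # target process died
        [[ -S $sock ]] && return 0               # socket is present
        sleep 0.1
    done
    return 1
}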
00:14:05.636 [2024-04-18 09:42:28.999504] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60219 ] 00:14:05.895 [2024-04-18 09:42:29.172321] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:14:06.154 [2024-04-18 09:42:29.486934] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:14:06.154 [2024-04-18 09:42:29.487074] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:14:06.154 [2024-04-18 09:42:29.487211] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:14:06.154 [2024-04-18 09:42:29.487443] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:14:06.724 09:42:30 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:14:06.724 09:42:30 -- common/autotest_common.sh@850 -- # return 0 00:14:06.724 09:42:30 -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic 00:14:06.724 09:42:30 -- common/autotest_common.sh@549 -- # xtrace_disable 00:14:06.724 09:42:30 -- common/autotest_common.sh@10 -- # set +x 00:14:06.724 POWER: Env isn't set yet! 00:14:06.724 POWER: Attempting to initialise ACPI cpufreq power management... 00:14:06.724 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:14:06.724 POWER: Cannot set governor of lcore 0 to userspace 00:14:06.724 POWER: Attempting to initialise PSTAT power management... 00:14:06.724 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:14:06.724 POWER: Cannot set governor of lcore 0 to performance 00:14:06.724 POWER: Attempting to initialise AMD PSTATE power management... 00:14:06.724 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:14:06.724 POWER: Cannot set governor of lcore 0 to userspace 00:14:06.724 POWER: Attempting to initialise CPPC power management... 00:14:06.724 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:14:06.724 POWER: Cannot set governor of lcore 0 to userspace 00:14:06.724 POWER: Attempting to initialise VM power management... 00:14:06.724 GUEST_CHANNEL: Unable to connect to '/dev/virtio-ports/virtio.serial.port.poweragent.0' with error No such file or directory 00:14:06.724 POWER: Unable to set Power Management Environment for lcore 0 00:14:06.724 [2024-04-18 09:42:30.061393] dpdk_governor.c: 88:_init_core: *ERROR*: Failed to initialize on core0 00:14:06.724 [2024-04-18 09:42:30.061425] dpdk_governor.c: 118:_init: *ERROR*: Failed to initialize on core0 00:14:06.724 [2024-04-18 09:42:30.061440] scheduler_dynamic.c: 238:init: *NOTICE*: Unable to initialize dpdk governor 00:14:06.724 09:42:30 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:14:06.724 09:42:30 -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init 00:14:06.724 09:42:30 -- common/autotest_common.sh@549 -- # xtrace_disable 00:14:06.724 09:42:30 -- common/autotest_common.sh@10 -- # set +x 00:14:07.290 [2024-04-18 09:42:30.404285] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started. 
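The POWER errors above simply mean DPDK's power-management governor cannot be initialised inside this VM: the cpufreq sysfs entries it tries to open are not writable (or not present), so the dynamic scheduler logs the failure and continues without it. A quick, illustrative way to see what a given host actually exposes is shown below; the sysfs path is the one named in the error messages.

# Illustrative check of cpufreq governors; read-only, safe to run anywhere.
for cpu in /sys/devices/system/cpu/cpu[0-9]*; do
    gov_file=$cpu/cpufreq/scaling_governor
    if [[ -r $gov_file ]]; then
        echo "$(basename "$cpu"): $(cat "$gov_file")"
    else
        echo "$(basename "$cpu"): no cpufreq governor exposed"
    fi
done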
00:14:07.290 09:42:30 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:14:07.290 09:42:30 -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread 00:14:07.290 09:42:30 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:14:07.290 09:42:30 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:14:07.290 09:42:30 -- common/autotest_common.sh@10 -- # set +x 00:14:07.290 ************************************ 00:14:07.290 START TEST scheduler_create_thread 00:14:07.290 ************************************ 00:14:07.290 09:42:30 -- common/autotest_common.sh@1111 -- # scheduler_create_thread 00:14:07.290 09:42:30 -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100 00:14:07.290 09:42:30 -- common/autotest_common.sh@549 -- # xtrace_disable 00:14:07.290 09:42:30 -- common/autotest_common.sh@10 -- # set +x 00:14:07.290 2 00:14:07.290 09:42:30 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:14:07.290 09:42:30 -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100 00:14:07.290 09:42:30 -- common/autotest_common.sh@549 -- # xtrace_disable 00:14:07.290 09:42:30 -- common/autotest_common.sh@10 -- # set +x 00:14:07.290 3 00:14:07.290 09:42:30 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:14:07.290 09:42:30 -- scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100 00:14:07.290 09:42:30 -- common/autotest_common.sh@549 -- # xtrace_disable 00:14:07.290 09:42:30 -- common/autotest_common.sh@10 -- # set +x 00:14:07.290 4 00:14:07.290 09:42:30 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:14:07.290 09:42:30 -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100 00:14:07.290 09:42:30 -- common/autotest_common.sh@549 -- # xtrace_disable 00:14:07.290 09:42:30 -- common/autotest_common.sh@10 -- # set +x 00:14:07.290 5 00:14:07.290 09:42:30 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:14:07.290 09:42:30 -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0 00:14:07.290 09:42:30 -- common/autotest_common.sh@549 -- # xtrace_disable 00:14:07.290 09:42:30 -- common/autotest_common.sh@10 -- # set +x 00:14:07.290 6 00:14:07.290 09:42:30 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:14:07.290 09:42:30 -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0 00:14:07.290 09:42:30 -- common/autotest_common.sh@549 -- # xtrace_disable 00:14:07.290 09:42:30 -- common/autotest_common.sh@10 -- # set +x 00:14:07.290 7 00:14:07.290 09:42:30 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:14:07.290 09:42:30 -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0 00:14:07.290 09:42:30 -- common/autotest_common.sh@549 -- # xtrace_disable 00:14:07.290 09:42:30 -- common/autotest_common.sh@10 -- # set +x 00:14:07.290 8 00:14:07.290 09:42:30 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:14:07.290 09:42:30 -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0 00:14:07.290 09:42:30 -- common/autotest_common.sh@549 -- # xtrace_disable 00:14:07.290 09:42:30 -- common/autotest_common.sh@10 -- # set +x 00:14:07.290 9 00:14:07.290 
09:42:30 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:14:07.290 09:42:30 -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30 00:14:07.290 09:42:30 -- common/autotest_common.sh@549 -- # xtrace_disable 00:14:07.290 09:42:30 -- common/autotest_common.sh@10 -- # set +x 00:14:07.290 10 00:14:07.290 09:42:30 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:14:07.290 09:42:30 -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0 00:14:07.290 09:42:30 -- common/autotest_common.sh@549 -- # xtrace_disable 00:14:07.290 09:42:30 -- common/autotest_common.sh@10 -- # set +x 00:14:07.290 09:42:30 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:14:07.290 09:42:30 -- scheduler/scheduler.sh@22 -- # thread_id=11 00:14:07.290 09:42:30 -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50 00:14:07.290 09:42:30 -- common/autotest_common.sh@549 -- # xtrace_disable 00:14:07.290 09:42:30 -- common/autotest_common.sh@10 -- # set +x 00:14:07.290 09:42:30 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:14:07.290 09:42:30 -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n deleted -a 100 00:14:07.290 09:42:30 -- common/autotest_common.sh@549 -- # xtrace_disable 00:14:07.290 09:42:30 -- common/autotest_common.sh@10 -- # set +x 00:14:08.226 09:42:31 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:14:08.226 09:42:31 -- scheduler/scheduler.sh@25 -- # thread_id=12 00:14:08.226 09:42:31 -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12 00:14:08.226 09:42:31 -- common/autotest_common.sh@549 -- # xtrace_disable 00:14:08.226 09:42:31 -- common/autotest_common.sh@10 -- # set +x 00:14:09.600 ************************************ 00:14:09.600 END TEST scheduler_create_thread 00:14:09.600 ************************************ 00:14:09.600 09:42:32 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:14:09.600 00:14:09.600 real 0m2.138s 00:14:09.600 user 0m0.019s 00:14:09.600 sys 0m0.007s 00:14:09.600 09:42:32 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:14:09.600 09:42:32 -- common/autotest_common.sh@10 -- # set +x 00:14:09.600 09:42:32 -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:14:09.600 09:42:32 -- scheduler/scheduler.sh@46 -- # killprocess 60219 00:14:09.600 09:42:32 -- common/autotest_common.sh@936 -- # '[' -z 60219 ']' 00:14:09.600 09:42:32 -- common/autotest_common.sh@940 -- # kill -0 60219 00:14:09.600 09:42:32 -- common/autotest_common.sh@941 -- # uname 00:14:09.600 09:42:32 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:14:09.600 09:42:32 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 60219 00:14:09.600 killing process with pid 60219 00:14:09.600 09:42:32 -- common/autotest_common.sh@942 -- # process_name=reactor_2 00:14:09.600 09:42:32 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']' 00:14:09.600 09:42:32 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 60219' 00:14:09.600 09:42:32 -- common/autotest_common.sh@955 -- # kill 60219 00:14:09.600 09:42:32 -- common/autotest_common.sh@960 -- # wait 60219 00:14:09.860 [2024-04-18 09:42:33.103327] scheduler.c: 360:test_shutdown: *NOTICE*: Scheduler test application stopped. 
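The killprocess trace above follows a recognisable shutdown pattern: confirm the pid is set and alive, look up its command name, refuse to kill sudo, then kill and wait. The reconstruction below is only an illustrative condensation of the calls visible in the trace, not the exact helper from common/autotest_common.sh.

# Illustrative killprocess-style helper; assumes the pid is a child of this shell.
killprocess_sketch() {
    local pid=$1
    [[ -n $pid ]] || return 1
    kill -0 "$pid" 2>/dev/null || return 1                 # still running?
    local process_name
    process_name=$(ps --no-headers -o comm= "$pid")
    [[ $process_name == sudo ]] && return 1                # never kill sudo
    echo "killing process with pid $pid"
    kill "$pid"
    wait "$pid" 2>/dev/null
}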
00:14:11.237 00:14:11.237 real 0m5.565s 00:14:11.237 user 0m9.804s 00:14:11.237 sys 0m0.610s 00:14:11.237 09:42:34 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:14:11.237 ************************************ 00:14:11.237 END TEST event_scheduler 00:14:11.237 ************************************ 00:14:11.237 09:42:34 -- common/autotest_common.sh@10 -- # set +x 00:14:11.237 09:42:34 -- event/event.sh@51 -- # modprobe -n nbd 00:14:11.237 09:42:34 -- event/event.sh@52 -- # run_test app_repeat app_repeat_test 00:14:11.237 09:42:34 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:14:11.237 09:42:34 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:14:11.237 09:42:34 -- common/autotest_common.sh@10 -- # set +x 00:14:11.237 ************************************ 00:14:11.237 START TEST app_repeat 00:14:11.237 ************************************ 00:14:11.237 09:42:34 -- common/autotest_common.sh@1111 -- # app_repeat_test 00:14:11.237 09:42:34 -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:14:11.237 09:42:34 -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:14:11.237 09:42:34 -- event/event.sh@13 -- # local nbd_list 00:14:11.237 09:42:34 -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1') 00:14:11.237 09:42:34 -- event/event.sh@14 -- # local bdev_list 00:14:11.237 09:42:34 -- event/event.sh@15 -- # local repeat_times=4 00:14:11.237 09:42:34 -- event/event.sh@17 -- # modprobe nbd 00:14:11.237 Process app_repeat pid: 60333 00:14:11.237 spdk_app_start Round 0 00:14:11.237 09:42:34 -- event/event.sh@19 -- # repeat_pid=60333 00:14:11.237 09:42:34 -- event/event.sh@20 -- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT 00:14:11.237 09:42:34 -- event/event.sh@21 -- # echo 'Process app_repeat pid: 60333' 00:14:11.237 09:42:34 -- event/event.sh@18 -- # /home/vagrant/spdk_repo/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 00:14:11.237 09:42:34 -- event/event.sh@23 -- # for i in {0..2} 00:14:11.237 09:42:34 -- event/event.sh@24 -- # echo 'spdk_app_start Round 0' 00:14:11.237 09:42:34 -- event/event.sh@25 -- # waitforlisten 60333 /var/tmp/spdk-nbd.sock 00:14:11.237 09:42:34 -- common/autotest_common.sh@817 -- # '[' -z 60333 ']' 00:14:11.237 09:42:34 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:14:11.237 09:42:34 -- common/autotest_common.sh@822 -- # local max_retries=100 00:14:11.237 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:14:11.237 09:42:34 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:14:11.237 09:42:34 -- common/autotest_common.sh@826 -- # xtrace_disable 00:14:11.237 09:42:34 -- common/autotest_common.sh@10 -- # set +x 00:14:11.237 [2024-04-18 09:42:34.586627] Starting SPDK v24.05-pre git sha1 65b4e17c6 / DPDK 23.11.0 initialization... 
00:14:11.237 [2024-04-18 09:42:34.586803] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60333 ] 00:14:11.495 [2024-04-18 09:42:34.762444] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 2 00:14:11.755 [2024-04-18 09:42:35.054247] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:14:11.755 [2024-04-18 09:42:35.054257] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:14:12.323 09:42:35 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:14:12.323 09:42:35 -- common/autotest_common.sh@850 -- # return 0 00:14:12.323 09:42:35 -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:14:12.581 Malloc0 00:14:12.581 09:42:35 -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:14:12.839 Malloc1 00:14:12.839 09:42:36 -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:14:12.839 09:42:36 -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:14:12.839 09:42:36 -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:14:12.839 09:42:36 -- bdev/nbd_common.sh@91 -- # local bdev_list 00:14:12.839 09:42:36 -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:14:12.839 09:42:36 -- bdev/nbd_common.sh@92 -- # local nbd_list 00:14:12.839 09:42:36 -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:14:12.839 09:42:36 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:14:12.839 09:42:36 -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:14:12.839 09:42:36 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:14:12.839 09:42:36 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:14:12.839 09:42:36 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:14:12.839 09:42:36 -- bdev/nbd_common.sh@12 -- # local i 00:14:12.839 09:42:36 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:14:12.839 09:42:36 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:14:12.839 09:42:36 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:14:13.098 /dev/nbd0 00:14:13.098 09:42:36 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:14:13.098 09:42:36 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:14:13.098 09:42:36 -- common/autotest_common.sh@854 -- # local nbd_name=nbd0 00:14:13.098 09:42:36 -- common/autotest_common.sh@855 -- # local i 00:14:13.098 09:42:36 -- common/autotest_common.sh@857 -- # (( i = 1 )) 00:14:13.098 09:42:36 -- common/autotest_common.sh@857 -- # (( i <= 20 )) 00:14:13.098 09:42:36 -- common/autotest_common.sh@858 -- # grep -q -w nbd0 /proc/partitions 00:14:13.098 09:42:36 -- common/autotest_common.sh@859 -- # break 00:14:13.098 09:42:36 -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:14:13.098 09:42:36 -- common/autotest_common.sh@870 -- # (( i <= 20 )) 00:14:13.098 09:42:36 -- common/autotest_common.sh@871 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:14:13.098 1+0 records in 00:14:13.098 1+0 records out 00:14:13.098 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000343541 s, 11.9 MB/s 00:14:13.098 09:42:36 -- 
common/autotest_common.sh@872 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:14:13.098 09:42:36 -- common/autotest_common.sh@872 -- # size=4096 00:14:13.098 09:42:36 -- common/autotest_common.sh@873 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:14:13.098 09:42:36 -- common/autotest_common.sh@874 -- # '[' 4096 '!=' 0 ']' 00:14:13.098 09:42:36 -- common/autotest_common.sh@875 -- # return 0 00:14:13.098 09:42:36 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:14:13.098 09:42:36 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:14:13.098 09:42:36 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:14:13.356 /dev/nbd1 00:14:13.356 09:42:36 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:14:13.616 09:42:36 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:14:13.616 09:42:36 -- common/autotest_common.sh@854 -- # local nbd_name=nbd1 00:14:13.616 09:42:36 -- common/autotest_common.sh@855 -- # local i 00:14:13.616 09:42:36 -- common/autotest_common.sh@857 -- # (( i = 1 )) 00:14:13.616 09:42:36 -- common/autotest_common.sh@857 -- # (( i <= 20 )) 00:14:13.616 09:42:36 -- common/autotest_common.sh@858 -- # grep -q -w nbd1 /proc/partitions 00:14:13.616 09:42:36 -- common/autotest_common.sh@859 -- # break 00:14:13.616 09:42:36 -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:14:13.616 09:42:36 -- common/autotest_common.sh@870 -- # (( i <= 20 )) 00:14:13.616 09:42:36 -- common/autotest_common.sh@871 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:14:13.616 1+0 records in 00:14:13.616 1+0 records out 00:14:13.616 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000787141 s, 5.2 MB/s 00:14:13.616 09:42:36 -- common/autotest_common.sh@872 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:14:13.616 09:42:36 -- common/autotest_common.sh@872 -- # size=4096 00:14:13.616 09:42:36 -- common/autotest_common.sh@873 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:14:13.616 09:42:36 -- common/autotest_common.sh@874 -- # '[' 4096 '!=' 0 ']' 00:14:13.616 09:42:36 -- common/autotest_common.sh@875 -- # return 0 00:14:13.616 09:42:36 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:14:13.616 09:42:36 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:14:13.616 09:42:36 -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:14:13.616 09:42:36 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:14:13.616 09:42:36 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:14:13.879 09:42:37 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:14:13.879 { 00:14:13.879 "nbd_device": "/dev/nbd0", 00:14:13.879 "bdev_name": "Malloc0" 00:14:13.879 }, 00:14:13.879 { 00:14:13.879 "nbd_device": "/dev/nbd1", 00:14:13.879 "bdev_name": "Malloc1" 00:14:13.879 } 00:14:13.879 ]' 00:14:13.879 09:42:37 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:14:13.879 09:42:37 -- bdev/nbd_common.sh@64 -- # echo '[ 00:14:13.879 { 00:14:13.879 "nbd_device": "/dev/nbd0", 00:14:13.879 "bdev_name": "Malloc0" 00:14:13.879 }, 00:14:13.879 { 00:14:13.879 "nbd_device": "/dev/nbd1", 00:14:13.879 "bdev_name": "Malloc1" 00:14:13.879 } 00:14:13.879 ]' 00:14:13.879 09:42:37 -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:14:13.879 /dev/nbd1' 00:14:13.879 09:42:37 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:14:13.879 09:42:37 -- bdev/nbd_common.sh@65 -- # echo 
'/dev/nbd0 00:14:13.879 /dev/nbd1' 00:14:13.879 09:42:37 -- bdev/nbd_common.sh@65 -- # count=2 00:14:13.879 09:42:37 -- bdev/nbd_common.sh@66 -- # echo 2 00:14:13.879 09:42:37 -- bdev/nbd_common.sh@95 -- # count=2 00:14:13.879 09:42:37 -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:14:13.879 09:42:37 -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:14:13.879 09:42:37 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:14:13.879 09:42:37 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:14:13.880 09:42:37 -- bdev/nbd_common.sh@71 -- # local operation=write 00:14:13.880 09:42:37 -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:14:13.880 09:42:37 -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:14:13.880 09:42:37 -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:14:13.880 256+0 records in 00:14:13.880 256+0 records out 00:14:13.880 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0112834 s, 92.9 MB/s 00:14:13.880 09:42:37 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:14:13.880 09:42:37 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:14:13.880 256+0 records in 00:14:13.880 256+0 records out 00:14:13.880 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0307313 s, 34.1 MB/s 00:14:13.880 09:42:37 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:14:13.880 09:42:37 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:14:13.880 256+0 records in 00:14:13.880 256+0 records out 00:14:13.880 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0312285 s, 33.6 MB/s 00:14:13.880 09:42:37 -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:14:13.880 09:42:37 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:14:13.880 09:42:37 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:14:13.880 09:42:37 -- bdev/nbd_common.sh@71 -- # local operation=verify 00:14:13.880 09:42:37 -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:14:13.880 09:42:37 -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:14:13.880 09:42:37 -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:14:13.880 09:42:37 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:14:13.880 09:42:37 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:14:13.880 09:42:37 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:14:13.880 09:42:37 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:14:13.880 09:42:37 -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:14:13.880 09:42:37 -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:14:13.880 09:42:37 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:14:13.880 09:42:37 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:14:13.880 09:42:37 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:14:13.880 09:42:37 -- bdev/nbd_common.sh@51 -- # local i 00:14:13.880 09:42:37 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:14:13.880 09:42:37 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:14:14.139 09:42:37 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:14:14.139 09:42:37 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:14:14.139 09:42:37 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:14:14.139 09:42:37 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:14:14.139 09:42:37 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:14:14.139 09:42:37 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:14:14.139 09:42:37 -- bdev/nbd_common.sh@41 -- # break 00:14:14.139 09:42:37 -- bdev/nbd_common.sh@45 -- # return 0 00:14:14.139 09:42:37 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:14:14.139 09:42:37 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:14:14.398 09:42:37 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:14:14.398 09:42:37 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:14:14.398 09:42:37 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:14:14.398 09:42:37 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:14:14.398 09:42:37 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:14:14.398 09:42:37 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:14:14.398 09:42:37 -- bdev/nbd_common.sh@41 -- # break 00:14:14.398 09:42:37 -- bdev/nbd_common.sh@45 -- # return 0 00:14:14.398 09:42:37 -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:14:14.398 09:42:37 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:14:14.398 09:42:37 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:14:14.657 09:42:38 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:14:14.657 09:42:38 -- bdev/nbd_common.sh@64 -- # echo '[]' 00:14:14.657 09:42:38 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:14:14.915 09:42:38 -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:14:14.916 09:42:38 -- bdev/nbd_common.sh@65 -- # echo '' 00:14:14.916 09:42:38 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:14:14.916 09:42:38 -- bdev/nbd_common.sh@65 -- # true 00:14:14.916 09:42:38 -- bdev/nbd_common.sh@65 -- # count=0 00:14:14.916 09:42:38 -- bdev/nbd_common.sh@66 -- # echo 0 00:14:14.916 09:42:38 -- bdev/nbd_common.sh@104 -- # count=0 00:14:14.916 09:42:38 -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:14:14.916 09:42:38 -- bdev/nbd_common.sh@109 -- # return 0 00:14:14.916 09:42:38 -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:14:15.175 09:42:38 -- event/event.sh@35 -- # sleep 3 00:14:16.550 [2024-04-18 09:42:39.760012] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 2 00:14:16.808 [2024-04-18 09:42:39.991313] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:14:16.808 [2024-04-18 09:42:39.991315] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:14:16.808 [2024-04-18 09:42:40.184279] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:14:16.808 [2024-04-18 09:42:40.184343] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:14:18.185 spdk_app_start Round 1 00:14:18.185 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
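The nbd_dd_data_verify flow traced in Round 0 above writes a random pattern through each NBD device and compares it back before tearing the devices down; Round 1 repeats the same sequence. A condensed sketch of that round trip is shown below; the block size, count, and cmp length follow the dd/cmp lines in the log, while the temporary path is a stand-in for the test's nbdrandtest file.

# Illustrative write/verify round trip over two NBD devices.
tmp_file=/tmp/nbdrandtest            # the test uses test/event/nbdrandtest
nbd_list=(/dev/nbd0 /dev/nbd1)

dd if=/dev/urandom of="$tmp_file" bs=4096 count=256
for nbd in "${nbd_list[@]}"; do
    dd if="$tmp_file" of="$nbd" bs=4096 count=256 oflag=direct
done
for nbd in "${nbd_list[@]}"; do
    cmp -b -n 1M "$tmp_file" "$nbd"  # non-zero exit on any mismatch
done
rm "$tmp_file"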
00:14:18.185 09:42:41 -- event/event.sh@23 -- # for i in {0..2} 00:14:18.185 09:42:41 -- event/event.sh@24 -- # echo 'spdk_app_start Round 1' 00:14:18.185 09:42:41 -- event/event.sh@25 -- # waitforlisten 60333 /var/tmp/spdk-nbd.sock 00:14:18.185 09:42:41 -- common/autotest_common.sh@817 -- # '[' -z 60333 ']' 00:14:18.185 09:42:41 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:14:18.185 09:42:41 -- common/autotest_common.sh@822 -- # local max_retries=100 00:14:18.185 09:42:41 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:14:18.185 09:42:41 -- common/autotest_common.sh@826 -- # xtrace_disable 00:14:18.185 09:42:41 -- common/autotest_common.sh@10 -- # set +x 00:14:18.442 09:42:41 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:14:18.442 09:42:41 -- common/autotest_common.sh@850 -- # return 0 00:14:18.442 09:42:41 -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:14:18.702 Malloc0 00:14:18.960 09:42:42 -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:14:19.218 Malloc1 00:14:19.218 09:42:42 -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:14:19.218 09:42:42 -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:14:19.218 09:42:42 -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:14:19.218 09:42:42 -- bdev/nbd_common.sh@91 -- # local bdev_list 00:14:19.218 09:42:42 -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:14:19.218 09:42:42 -- bdev/nbd_common.sh@92 -- # local nbd_list 00:14:19.218 09:42:42 -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:14:19.218 09:42:42 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:14:19.218 09:42:42 -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:14:19.218 09:42:42 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:14:19.218 09:42:42 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:14:19.218 09:42:42 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:14:19.218 09:42:42 -- bdev/nbd_common.sh@12 -- # local i 00:14:19.218 09:42:42 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:14:19.218 09:42:42 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:14:19.218 09:42:42 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:14:19.476 /dev/nbd0 00:14:19.476 09:42:42 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:14:19.476 09:42:42 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:14:19.476 09:42:42 -- common/autotest_common.sh@854 -- # local nbd_name=nbd0 00:14:19.476 09:42:42 -- common/autotest_common.sh@855 -- # local i 00:14:19.476 09:42:42 -- common/autotest_common.sh@857 -- # (( i = 1 )) 00:14:19.476 09:42:42 -- common/autotest_common.sh@857 -- # (( i <= 20 )) 00:14:19.476 09:42:42 -- common/autotest_common.sh@858 -- # grep -q -w nbd0 /proc/partitions 00:14:19.476 09:42:42 -- common/autotest_common.sh@859 -- # break 00:14:19.476 09:42:42 -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:14:19.476 09:42:42 -- common/autotest_common.sh@870 -- # (( i <= 20 )) 00:14:19.476 09:42:42 -- common/autotest_common.sh@871 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest 
bs=4096 count=1 iflag=direct 00:14:19.476 1+0 records in 00:14:19.476 1+0 records out 00:14:19.476 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000278938 s, 14.7 MB/s 00:14:19.476 09:42:42 -- common/autotest_common.sh@872 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:14:19.476 09:42:42 -- common/autotest_common.sh@872 -- # size=4096 00:14:19.476 09:42:42 -- common/autotest_common.sh@873 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:14:19.476 09:42:42 -- common/autotest_common.sh@874 -- # '[' 4096 '!=' 0 ']' 00:14:19.476 09:42:42 -- common/autotest_common.sh@875 -- # return 0 00:14:19.476 09:42:42 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:14:19.476 09:42:42 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:14:19.476 09:42:42 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:14:19.735 /dev/nbd1 00:14:19.735 09:42:43 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:14:19.735 09:42:43 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:14:19.735 09:42:43 -- common/autotest_common.sh@854 -- # local nbd_name=nbd1 00:14:19.735 09:42:43 -- common/autotest_common.sh@855 -- # local i 00:14:19.735 09:42:43 -- common/autotest_common.sh@857 -- # (( i = 1 )) 00:14:19.735 09:42:43 -- common/autotest_common.sh@857 -- # (( i <= 20 )) 00:14:19.735 09:42:43 -- common/autotest_common.sh@858 -- # grep -q -w nbd1 /proc/partitions 00:14:19.735 09:42:43 -- common/autotest_common.sh@859 -- # break 00:14:19.735 09:42:43 -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:14:19.735 09:42:43 -- common/autotest_common.sh@870 -- # (( i <= 20 )) 00:14:19.735 09:42:43 -- common/autotest_common.sh@871 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:14:19.735 1+0 records in 00:14:19.735 1+0 records out 00:14:19.735 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000364723 s, 11.2 MB/s 00:14:19.735 09:42:43 -- common/autotest_common.sh@872 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:14:19.735 09:42:43 -- common/autotest_common.sh@872 -- # size=4096 00:14:19.735 09:42:43 -- common/autotest_common.sh@873 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:14:19.735 09:42:43 -- common/autotest_common.sh@874 -- # '[' 4096 '!=' 0 ']' 00:14:19.735 09:42:43 -- common/autotest_common.sh@875 -- # return 0 00:14:19.735 09:42:43 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:14:19.735 09:42:43 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:14:19.735 09:42:43 -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:14:19.735 09:42:43 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:14:19.735 09:42:43 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:14:19.993 09:42:43 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:14:19.993 { 00:14:19.993 "nbd_device": "/dev/nbd0", 00:14:19.994 "bdev_name": "Malloc0" 00:14:19.994 }, 00:14:19.994 { 00:14:19.994 "nbd_device": "/dev/nbd1", 00:14:19.994 "bdev_name": "Malloc1" 00:14:19.994 } 00:14:19.994 ]' 00:14:19.994 09:42:43 -- bdev/nbd_common.sh@64 -- # echo '[ 00:14:19.994 { 00:14:19.994 "nbd_device": "/dev/nbd0", 00:14:19.994 "bdev_name": "Malloc0" 00:14:19.994 }, 00:14:19.994 { 00:14:19.994 "nbd_device": "/dev/nbd1", 00:14:19.994 "bdev_name": "Malloc1" 00:14:19.994 } 00:14:19.994 ]' 00:14:19.994 09:42:43 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:14:19.994 09:42:43 -- bdev/nbd_common.sh@64 -- 
# nbd_disks_name='/dev/nbd0 00:14:19.994 /dev/nbd1' 00:14:19.994 09:42:43 -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:14:19.994 /dev/nbd1' 00:14:19.994 09:42:43 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:14:19.994 09:42:43 -- bdev/nbd_common.sh@65 -- # count=2 00:14:19.994 09:42:43 -- bdev/nbd_common.sh@66 -- # echo 2 00:14:19.994 09:42:43 -- bdev/nbd_common.sh@95 -- # count=2 00:14:19.994 09:42:43 -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:14:19.994 09:42:43 -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:14:19.994 09:42:43 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:14:19.994 09:42:43 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:14:19.994 09:42:43 -- bdev/nbd_common.sh@71 -- # local operation=write 00:14:19.994 09:42:43 -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:14:19.994 09:42:43 -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:14:19.994 09:42:43 -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:14:20.265 256+0 records in 00:14:20.265 256+0 records out 00:14:20.265 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00835116 s, 126 MB/s 00:14:20.265 09:42:43 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:14:20.265 09:42:43 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:14:20.265 256+0 records in 00:14:20.265 256+0 records out 00:14:20.265 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0314188 s, 33.4 MB/s 00:14:20.265 09:42:43 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:14:20.265 09:42:43 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:14:20.265 256+0 records in 00:14:20.265 256+0 records out 00:14:20.265 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0345012 s, 30.4 MB/s 00:14:20.265 09:42:43 -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:14:20.265 09:42:43 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:14:20.265 09:42:43 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:14:20.265 09:42:43 -- bdev/nbd_common.sh@71 -- # local operation=verify 00:14:20.265 09:42:43 -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:14:20.265 09:42:43 -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:14:20.265 09:42:43 -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:14:20.265 09:42:43 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:14:20.265 09:42:43 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:14:20.265 09:42:43 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:14:20.265 09:42:43 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:14:20.265 09:42:43 -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:14:20.265 09:42:43 -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:14:20.265 09:42:43 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:14:20.265 09:42:43 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:14:20.265 09:42:43 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:14:20.265 09:42:43 -- bdev/nbd_common.sh@51 -- # local i 00:14:20.265 
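The dd and cmp calls traced above are the core of nbd_dd_data_verify: 1 MiB of random data is pushed through each exported /dev/nbd device and then compared byte-for-byte against the source file. A condensed sketch of that flow, using the same commands as the trace (only the scratch-file path is an assumption):

    nbd_list=('/dev/nbd0' '/dev/nbd1')
    tmp_file=/tmp/nbdrandtest                                        # assumed scratch path

    dd if=/dev/urandom of="$tmp_file" bs=4096 count=256              # 256 x 4 KiB = 1 MiB of random data
    for nbd in "${nbd_list[@]}"; do
        dd if="$tmp_file" of="$nbd" bs=4096 count=256 oflag=direct   # write it through the nbd device
    done
    for nbd in "${nbd_list[@]}"; do
        cmp -b -n 1M "$tmp_file" "$nbd"                              # read back and verify the first 1 MiB
    done
    rm "$tmp_file"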
09:42:43 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:14:20.265 09:42:43 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:14:20.540 09:42:43 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:14:20.540 09:42:43 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:14:20.540 09:42:43 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:14:20.540 09:42:43 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:14:20.540 09:42:43 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:14:20.540 09:42:43 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:14:20.540 09:42:43 -- bdev/nbd_common.sh@41 -- # break 00:14:20.540 09:42:43 -- bdev/nbd_common.sh@45 -- # return 0 00:14:20.540 09:42:43 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:14:20.540 09:42:43 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:14:20.798 09:42:44 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:14:20.798 09:42:44 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:14:20.798 09:42:44 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:14:20.798 09:42:44 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:14:20.798 09:42:44 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:14:20.798 09:42:44 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:14:20.798 09:42:44 -- bdev/nbd_common.sh@41 -- # break 00:14:20.798 09:42:44 -- bdev/nbd_common.sh@45 -- # return 0 00:14:20.798 09:42:44 -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:14:20.798 09:42:44 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:14:20.798 09:42:44 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:14:21.056 09:42:44 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:14:21.056 09:42:44 -- bdev/nbd_common.sh@64 -- # echo '[]' 00:14:21.056 09:42:44 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:14:21.056 09:42:44 -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:14:21.056 09:42:44 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:14:21.056 09:42:44 -- bdev/nbd_common.sh@65 -- # echo '' 00:14:21.056 09:42:44 -- bdev/nbd_common.sh@65 -- # true 00:14:21.056 09:42:44 -- bdev/nbd_common.sh@65 -- # count=0 00:14:21.056 09:42:44 -- bdev/nbd_common.sh@66 -- # echo 0 00:14:21.056 09:42:44 -- bdev/nbd_common.sh@104 -- # count=0 00:14:21.056 09:42:44 -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:14:21.056 09:42:44 -- bdev/nbd_common.sh@109 -- # return 0 00:14:21.056 09:42:44 -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:14:21.623 09:42:44 -- event/event.sh@35 -- # sleep 3 00:14:22.995 [2024-04-18 09:42:46.007277] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 2 00:14:22.995 [2024-04-18 09:42:46.240325] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:14:22.995 [2024-04-18 09:42:46.240329] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:14:23.253 [2024-04-18 09:42:46.430863] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:14:23.253 [2024-04-18 09:42:46.430978] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 
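After each nbd_stop_disk RPC, the trace above runs waitfornbd_exit, which polls /proc/partitions until the kernel removes the nbd entry. A simplified sketch of that loop; the 20-attempt cap matches the (( i <= 20 )) counter in the trace, while the sleep interval is an assumption:

    waitfornbd_exit() {
        local nbd_name=$1 i
        for ((i = 1; i <= 20; i++)); do
            if ! grep -q -w "$nbd_name" /proc/partitions; then
                return 0            # partition entry is gone, the stop completed
            fi
            sleep 0.1               # assumed back-off between retries
        done
        return 1                    # device never went away
    }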
00:14:24.660 spdk_app_start Round 2 00:14:24.660 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:14:24.660 09:42:47 -- event/event.sh@23 -- # for i in {0..2} 00:14:24.660 09:42:47 -- event/event.sh@24 -- # echo 'spdk_app_start Round 2' 00:14:24.660 09:42:47 -- event/event.sh@25 -- # waitforlisten 60333 /var/tmp/spdk-nbd.sock 00:14:24.660 09:42:47 -- common/autotest_common.sh@817 -- # '[' -z 60333 ']' 00:14:24.660 09:42:47 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:14:24.660 09:42:47 -- common/autotest_common.sh@822 -- # local max_retries=100 00:14:24.660 09:42:47 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:14:24.660 09:42:47 -- common/autotest_common.sh@826 -- # xtrace_disable 00:14:24.660 09:42:47 -- common/autotest_common.sh@10 -- # set +x 00:14:24.918 09:42:48 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:14:24.918 09:42:48 -- common/autotest_common.sh@850 -- # return 0 00:14:24.918 09:42:48 -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:14:25.177 Malloc0 00:14:25.177 09:42:48 -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:14:25.435 Malloc1 00:14:25.435 09:42:48 -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:14:25.435 09:42:48 -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:14:25.435 09:42:48 -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:14:25.435 09:42:48 -- bdev/nbd_common.sh@91 -- # local bdev_list 00:14:25.435 09:42:48 -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:14:25.435 09:42:48 -- bdev/nbd_common.sh@92 -- # local nbd_list 00:14:25.435 09:42:48 -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:14:25.435 09:42:48 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:14:25.435 09:42:48 -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:14:25.435 09:42:48 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:14:25.435 09:42:48 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:14:25.435 09:42:48 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:14:25.435 09:42:48 -- bdev/nbd_common.sh@12 -- # local i 00:14:25.435 09:42:48 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:14:25.435 09:42:48 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:14:25.435 09:42:48 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:14:25.693 /dev/nbd0 00:14:25.693 09:42:48 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:14:25.693 09:42:48 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:14:25.693 09:42:48 -- common/autotest_common.sh@854 -- # local nbd_name=nbd0 00:14:25.693 09:42:48 -- common/autotest_common.sh@855 -- # local i 00:14:25.693 09:42:48 -- common/autotest_common.sh@857 -- # (( i = 1 )) 00:14:25.693 09:42:48 -- common/autotest_common.sh@857 -- # (( i <= 20 )) 00:14:25.693 09:42:48 -- common/autotest_common.sh@858 -- # grep -q -w nbd0 /proc/partitions 00:14:25.693 09:42:48 -- common/autotest_common.sh@859 -- # break 00:14:25.693 09:42:48 -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:14:25.693 09:42:48 -- common/autotest_common.sh@870 -- # (( i 
<= 20 )) 00:14:25.693 09:42:48 -- common/autotest_common.sh@871 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:14:25.693 1+0 records in 00:14:25.693 1+0 records out 00:14:25.693 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000352097 s, 11.6 MB/s 00:14:25.693 09:42:48 -- common/autotest_common.sh@872 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:14:25.693 09:42:48 -- common/autotest_common.sh@872 -- # size=4096 00:14:25.693 09:42:48 -- common/autotest_common.sh@873 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:14:25.693 09:42:48 -- common/autotest_common.sh@874 -- # '[' 4096 '!=' 0 ']' 00:14:25.693 09:42:48 -- common/autotest_common.sh@875 -- # return 0 00:14:25.693 09:42:48 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:14:25.693 09:42:48 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:14:25.693 09:42:48 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:14:25.951 /dev/nbd1 00:14:25.951 09:42:49 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:14:25.951 09:42:49 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:14:25.951 09:42:49 -- common/autotest_common.sh@854 -- # local nbd_name=nbd1 00:14:25.951 09:42:49 -- common/autotest_common.sh@855 -- # local i 00:14:25.951 09:42:49 -- common/autotest_common.sh@857 -- # (( i = 1 )) 00:14:25.951 09:42:49 -- common/autotest_common.sh@857 -- # (( i <= 20 )) 00:14:25.951 09:42:49 -- common/autotest_common.sh@858 -- # grep -q -w nbd1 /proc/partitions 00:14:25.951 09:42:49 -- common/autotest_common.sh@859 -- # break 00:14:25.951 09:42:49 -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:14:25.951 09:42:49 -- common/autotest_common.sh@870 -- # (( i <= 20 )) 00:14:25.951 09:42:49 -- common/autotest_common.sh@871 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:14:25.951 1+0 records in 00:14:25.951 1+0 records out 00:14:25.951 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000461936 s, 8.9 MB/s 00:14:25.951 09:42:49 -- common/autotest_common.sh@872 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:14:25.951 09:42:49 -- common/autotest_common.sh@872 -- # size=4096 00:14:25.951 09:42:49 -- common/autotest_common.sh@873 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:14:25.951 09:42:49 -- common/autotest_common.sh@874 -- # '[' 4096 '!=' 0 ']' 00:14:25.951 09:42:49 -- common/autotest_common.sh@875 -- # return 0 00:14:25.951 09:42:49 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:14:25.951 09:42:49 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:14:25.951 09:42:49 -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:14:25.951 09:42:49 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:14:25.951 09:42:49 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:14:26.209 09:42:49 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:14:26.209 { 00:14:26.209 "nbd_device": "/dev/nbd0", 00:14:26.209 "bdev_name": "Malloc0" 00:14:26.209 }, 00:14:26.209 { 00:14:26.209 "nbd_device": "/dev/nbd1", 00:14:26.209 "bdev_name": "Malloc1" 00:14:26.209 } 00:14:26.209 ]' 00:14:26.209 09:42:49 -- bdev/nbd_common.sh@64 -- # echo '[ 00:14:26.209 { 00:14:26.209 "nbd_device": "/dev/nbd0", 00:14:26.209 "bdev_name": "Malloc0" 00:14:26.209 }, 00:14:26.209 { 00:14:26.209 "nbd_device": "/dev/nbd1", 00:14:26.209 "bdev_name": "Malloc1" 00:14:26.209 } 
00:14:26.209 ]' 00:14:26.209 09:42:49 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:14:26.467 09:42:49 -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:14:26.467 /dev/nbd1' 00:14:26.467 09:42:49 -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:14:26.467 /dev/nbd1' 00:14:26.467 09:42:49 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:14:26.467 09:42:49 -- bdev/nbd_common.sh@65 -- # count=2 00:14:26.467 09:42:49 -- bdev/nbd_common.sh@66 -- # echo 2 00:14:26.467 09:42:49 -- bdev/nbd_common.sh@95 -- # count=2 00:14:26.467 09:42:49 -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:14:26.467 09:42:49 -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:14:26.467 09:42:49 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:14:26.467 09:42:49 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:14:26.467 09:42:49 -- bdev/nbd_common.sh@71 -- # local operation=write 00:14:26.467 09:42:49 -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:14:26.467 09:42:49 -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:14:26.467 09:42:49 -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:14:26.467 256+0 records in 00:14:26.467 256+0 records out 00:14:26.467 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0121771 s, 86.1 MB/s 00:14:26.467 09:42:49 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:14:26.467 09:42:49 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:14:26.467 256+0 records in 00:14:26.467 256+0 records out 00:14:26.467 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0321762 s, 32.6 MB/s 00:14:26.467 09:42:49 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:14:26.467 09:42:49 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:14:26.467 256+0 records in 00:14:26.467 256+0 records out 00:14:26.467 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0412738 s, 25.4 MB/s 00:14:26.467 09:42:49 -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:14:26.467 09:42:49 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:14:26.467 09:42:49 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:14:26.467 09:42:49 -- bdev/nbd_common.sh@71 -- # local operation=verify 00:14:26.467 09:42:49 -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:14:26.467 09:42:49 -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:14:26.467 09:42:49 -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:14:26.467 09:42:49 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:14:26.467 09:42:49 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:14:26.467 09:42:49 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:14:26.467 09:42:49 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:14:26.467 09:42:49 -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:14:26.467 09:42:49 -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:14:26.467 09:42:49 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:14:26.467 09:42:49 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 
00:14:26.467 09:42:49 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:14:26.467 09:42:49 -- bdev/nbd_common.sh@51 -- # local i 00:14:26.467 09:42:49 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:14:26.467 09:42:49 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:14:26.725 09:42:49 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:14:26.725 09:42:49 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:14:26.725 09:42:49 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:14:26.725 09:42:49 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:14:26.725 09:42:49 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:14:26.725 09:42:49 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:14:26.725 09:42:49 -- bdev/nbd_common.sh@41 -- # break 00:14:26.725 09:42:49 -- bdev/nbd_common.sh@45 -- # return 0 00:14:26.725 09:42:49 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:14:26.725 09:42:49 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:14:26.983 09:42:50 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:14:26.983 09:42:50 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:14:26.983 09:42:50 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:14:26.983 09:42:50 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:14:26.983 09:42:50 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:14:26.983 09:42:50 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:14:26.983 09:42:50 -- bdev/nbd_common.sh@41 -- # break 00:14:26.983 09:42:50 -- bdev/nbd_common.sh@45 -- # return 0 00:14:26.983 09:42:50 -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:14:26.983 09:42:50 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:14:26.983 09:42:50 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:14:27.242 09:42:50 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:14:27.242 09:42:50 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:14:27.242 09:42:50 -- bdev/nbd_common.sh@64 -- # echo '[]' 00:14:27.242 09:42:50 -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:14:27.242 09:42:50 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:14:27.242 09:42:50 -- bdev/nbd_common.sh@65 -- # echo '' 00:14:27.242 09:42:50 -- bdev/nbd_common.sh@65 -- # true 00:14:27.242 09:42:50 -- bdev/nbd_common.sh@65 -- # count=0 00:14:27.242 09:42:50 -- bdev/nbd_common.sh@66 -- # echo 0 00:14:27.242 09:42:50 -- bdev/nbd_common.sh@104 -- # count=0 00:14:27.242 09:42:50 -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:14:27.242 09:42:50 -- bdev/nbd_common.sh@109 -- # return 0 00:14:27.242 09:42:50 -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:14:27.809 09:42:51 -- event/event.sh@35 -- # sleep 3 00:14:29.187 [2024-04-18 09:42:52.229108] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 2 00:14:29.187 [2024-04-18 09:42:52.463272] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:14:29.187 [2024-04-18 09:42:52.463274] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:14:29.446 [2024-04-18 09:42:52.656934] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 
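The nbd_get_count sequence traced above confirms that nothing is left exported after the stop: nbd_get_disks returns a JSON array, jq extracts the nbd_device fields, and grep -c counts how many look like /dev/nbd*. The same pipeline as a standalone sketch, with the rpc.py path and socket taken from the trace:

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    sock=/var/tmp/spdk-nbd.sock

    nbd_disks_json=$("$rpc" -s "$sock" nbd_get_disks)                    # '[]' once everything is stopped
    nbd_disks_name=$(echo "$nbd_disks_json" | jq -r '.[] | .nbd_device')
    count=$(echo "$nbd_disks_name" | grep -c /dev/nbd || true)           # grep -c exits non-zero on zero matches
    [[ $count -eq 0 ]] && echo 'all nbd devices released'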
00:14:29.446 [2024-04-18 09:42:52.657032] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:14:30.825 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:14:30.825 09:42:54 -- event/event.sh@38 -- # waitforlisten 60333 /var/tmp/spdk-nbd.sock 00:14:30.825 09:42:54 -- common/autotest_common.sh@817 -- # '[' -z 60333 ']' 00:14:30.825 09:42:54 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:14:30.825 09:42:54 -- common/autotest_common.sh@822 -- # local max_retries=100 00:14:30.825 09:42:54 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:14:30.825 09:42:54 -- common/autotest_common.sh@826 -- # xtrace_disable 00:14:30.825 09:42:54 -- common/autotest_common.sh@10 -- # set +x 00:14:31.141 09:42:54 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:14:31.141 09:42:54 -- common/autotest_common.sh@850 -- # return 0 00:14:31.141 09:42:54 -- event/event.sh@39 -- # killprocess 60333 00:14:31.141 09:42:54 -- common/autotest_common.sh@936 -- # '[' -z 60333 ']' 00:14:31.141 09:42:54 -- common/autotest_common.sh@940 -- # kill -0 60333 00:14:31.141 09:42:54 -- common/autotest_common.sh@941 -- # uname 00:14:31.141 09:42:54 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:14:31.141 09:42:54 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 60333 00:14:31.141 killing process with pid 60333 00:14:31.141 09:42:54 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:14:31.141 09:42:54 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:14:31.141 09:42:54 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 60333' 00:14:31.141 09:42:54 -- common/autotest_common.sh@955 -- # kill 60333 00:14:31.141 09:42:54 -- common/autotest_common.sh@960 -- # wait 60333 00:14:32.077 spdk_app_start is called in Round 0. 00:14:32.077 Shutdown signal received, stop current app iteration 00:14:32.077 Starting SPDK v24.05-pre git sha1 65b4e17c6 / DPDK 23.11.0 reinitialization... 00:14:32.077 spdk_app_start is called in Round 1. 00:14:32.077 Shutdown signal received, stop current app iteration 00:14:32.077 Starting SPDK v24.05-pre git sha1 65b4e17c6 / DPDK 23.11.0 reinitialization... 00:14:32.077 spdk_app_start is called in Round 2. 00:14:32.077 Shutdown signal received, stop current app iteration 00:14:32.077 Starting SPDK v24.05-pre git sha1 65b4e17c6 / DPDK 23.11.0 reinitialization... 00:14:32.077 spdk_app_start is called in Round 3. 
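Each repeat round ends with killprocess, traced above for pid 60333: confirm the pid is still alive, log its command name, send SIGTERM and wait for it so the next round starts from a clean slate. A simplified sketch; the real helper additionally special-cases processes running under sudo, which is omitted here:

    killprocess() {
        local pid=$1
        kill -0 "$pid"                                                  # fail fast if the process is already gone
        echo "killing process with pid $pid ($(ps --no-headers -o comm= "$pid"))"
        kill "$pid"                                                     # default signal is SIGTERM
        wait "$pid" || true                                             # reap the child; a non-zero exit is expected here
    }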
00:14:32.077 Shutdown signal received, stop current app iteration 00:14:32.077 09:42:55 -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT 00:14:32.077 09:42:55 -- event/event.sh@42 -- # return 0 00:14:32.077 00:14:32.077 real 0m20.932s 00:14:32.077 user 0m44.450s 00:14:32.077 sys 0m3.199s 00:14:32.077 09:42:55 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:14:32.077 09:42:55 -- common/autotest_common.sh@10 -- # set +x 00:14:32.077 ************************************ 00:14:32.077 END TEST app_repeat 00:14:32.077 ************************************ 00:14:32.336 09:42:55 -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 )) 00:14:32.336 09:42:55 -- event/event.sh@55 -- # run_test cpu_locks /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh 00:14:32.336 09:42:55 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:14:32.336 09:42:55 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:14:32.336 09:42:55 -- common/autotest_common.sh@10 -- # set +x 00:14:32.336 ************************************ 00:14:32.336 START TEST cpu_locks 00:14:32.336 ************************************ 00:14:32.336 09:42:55 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh 00:14:32.336 * Looking for test storage... 00:14:32.336 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event 00:14:32.336 09:42:55 -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock 00:14:32.336 09:42:55 -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock 00:14:32.336 09:42:55 -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT 00:14:32.336 09:42:55 -- event/cpu_locks.sh@166 -- # run_test default_locks default_locks 00:14:32.336 09:42:55 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:14:32.336 09:42:55 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:14:32.336 09:42:55 -- common/autotest_common.sh@10 -- # set +x 00:14:32.596 ************************************ 00:14:32.596 START TEST default_locks 00:14:32.596 ************************************ 00:14:32.596 09:42:55 -- common/autotest_common.sh@1111 -- # default_locks 00:14:32.596 09:42:55 -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=60800 00:14:32.596 09:42:55 -- event/cpu_locks.sh@45 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:14:32.596 09:42:55 -- event/cpu_locks.sh@47 -- # waitforlisten 60800 00:14:32.596 09:42:55 -- common/autotest_common.sh@817 -- # '[' -z 60800 ']' 00:14:32.596 09:42:55 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:32.596 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:32.596 09:42:55 -- common/autotest_common.sh@822 -- # local max_retries=100 00:14:32.596 09:42:55 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:32.596 09:42:55 -- common/autotest_common.sh@826 -- # xtrace_disable 00:14:32.596 09:42:55 -- common/autotest_common.sh@10 -- # set +x 00:14:32.596 [2024-04-18 09:42:55.935637] Starting SPDK v24.05-pre git sha1 65b4e17c6 / DPDK 23.11.0 initialization... 
00:14:32.596 [2024-04-18 09:42:55.935813] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60800 ] 00:14:32.854 [2024-04-18 09:42:56.107858] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:33.113 [2024-04-18 09:42:56.369909] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:14:34.051 09:42:57 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:14:34.051 09:42:57 -- common/autotest_common.sh@850 -- # return 0 00:14:34.051 09:42:57 -- event/cpu_locks.sh@49 -- # locks_exist 60800 00:14:34.051 09:42:57 -- event/cpu_locks.sh@22 -- # lslocks -p 60800 00:14:34.051 09:42:57 -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:14:34.310 09:42:57 -- event/cpu_locks.sh@50 -- # killprocess 60800 00:14:34.310 09:42:57 -- common/autotest_common.sh@936 -- # '[' -z 60800 ']' 00:14:34.310 09:42:57 -- common/autotest_common.sh@940 -- # kill -0 60800 00:14:34.310 09:42:57 -- common/autotest_common.sh@941 -- # uname 00:14:34.310 09:42:57 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:14:34.310 09:42:57 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 60800 00:14:34.310 killing process with pid 60800 00:14:34.310 09:42:57 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:14:34.310 09:42:57 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:14:34.310 09:42:57 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 60800' 00:14:34.310 09:42:57 -- common/autotest_common.sh@955 -- # kill 60800 00:14:34.310 09:42:57 -- common/autotest_common.sh@960 -- # wait 60800 00:14:36.845 09:42:59 -- event/cpu_locks.sh@52 -- # NOT waitforlisten 60800 00:14:36.845 09:42:59 -- common/autotest_common.sh@638 -- # local es=0 00:14:36.845 09:42:59 -- common/autotest_common.sh@640 -- # valid_exec_arg waitforlisten 60800 00:14:36.845 09:42:59 -- common/autotest_common.sh@626 -- # local arg=waitforlisten 00:14:36.845 09:42:59 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:14:36.845 09:42:59 -- common/autotest_common.sh@630 -- # type -t waitforlisten 00:14:36.845 09:42:59 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:14:36.845 09:42:59 -- common/autotest_common.sh@641 -- # waitforlisten 60800 00:14:36.845 09:42:59 -- common/autotest_common.sh@817 -- # '[' -z 60800 ']' 00:14:36.845 09:42:59 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:36.845 09:42:59 -- common/autotest_common.sh@822 -- # local max_retries=100 00:14:36.845 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:36.845 09:42:59 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
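The default_locks test above verifies the core lock is actually held by inspecting the target's open file locks: locks_exist simply greps the lslocks output for the pid for the spdk_cpu_lock file name, exactly as traced. As a standalone sketch:

    locks_exist() {
        lslocks -p "$1" | grep -q spdk_cpu_lock   # true while the pid still holds a CPU core lock file
    }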
00:14:36.845 09:42:59 -- common/autotest_common.sh@826 -- # xtrace_disable 00:14:36.845 09:42:59 -- common/autotest_common.sh@10 -- # set +x 00:14:36.845 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 832: kill: (60800) - No such process 00:14:36.845 ERROR: process (pid: 60800) is no longer running 00:14:36.845 09:42:59 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:14:36.845 09:42:59 -- common/autotest_common.sh@850 -- # return 1 00:14:36.845 09:42:59 -- common/autotest_common.sh@641 -- # es=1 00:14:36.845 09:42:59 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:14:36.845 09:42:59 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:14:36.845 09:42:59 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:14:36.845 09:42:59 -- event/cpu_locks.sh@54 -- # no_locks 00:14:36.845 09:42:59 -- event/cpu_locks.sh@26 -- # lock_files=() 00:14:36.845 ************************************ 00:14:36.845 END TEST default_locks 00:14:36.845 ************************************ 00:14:36.845 09:42:59 -- event/cpu_locks.sh@26 -- # local lock_files 00:14:36.845 09:42:59 -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:14:36.845 00:14:36.845 real 0m4.098s 00:14:36.845 user 0m4.128s 00:14:36.845 sys 0m0.736s 00:14:36.845 09:42:59 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:14:36.845 09:42:59 -- common/autotest_common.sh@10 -- # set +x 00:14:36.845 09:42:59 -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc 00:14:36.845 09:42:59 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:14:36.845 09:42:59 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:14:36.845 09:42:59 -- common/autotest_common.sh@10 -- # set +x 00:14:36.845 ************************************ 00:14:36.845 START TEST default_locks_via_rpc 00:14:36.845 ************************************ 00:14:36.845 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:36.845 09:42:59 -- common/autotest_common.sh@1111 -- # default_locks_via_rpc 00:14:36.845 09:42:59 -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=60879 00:14:36.845 09:42:59 -- event/cpu_locks.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:14:36.845 09:42:59 -- event/cpu_locks.sh@63 -- # waitforlisten 60879 00:14:36.845 09:42:59 -- common/autotest_common.sh@817 -- # '[' -z 60879 ']' 00:14:36.845 09:42:59 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:36.845 09:42:59 -- common/autotest_common.sh@822 -- # local max_retries=100 00:14:36.845 09:42:59 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:36.845 09:42:59 -- common/autotest_common.sh@826 -- # xtrace_disable 00:14:36.845 09:42:59 -- common/autotest_common.sh@10 -- # set +x 00:14:36.845 [2024-04-18 09:43:00.107414] Starting SPDK v24.05-pre git sha1 65b4e17c6 / DPDK 23.11.0 initialization... 
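The teardown traced above also exercises the NOT wrapper: waitforlisten is rerun against the already-killed pid 60800, and the test passes precisely because that call fails. A minimal sketch of the inverted-expectation helper; per the (( es > 128 )) check in the trace, the real one additionally treats deaths by signal as hard errors, a branch simplified away here:

    NOT() {
        local es=0
        "$@" || es=$?      # run the command that is expected to fail
        (( es != 0 ))      # succeed only if it actually failed
    }

    # usage, as in the trace: pid 60800 is gone, so waiting on it must not succeed
    NOT waitforlisten 60800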
00:14:36.845 [2024-04-18 09:43:00.107999] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60879 ] 00:14:37.104 [2024-04-18 09:43:00.287745] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:37.363 [2024-04-18 09:43:00.534186] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:14:38.298 09:43:01 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:14:38.298 09:43:01 -- common/autotest_common.sh@850 -- # return 0 00:14:38.298 09:43:01 -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks 00:14:38.298 09:43:01 -- common/autotest_common.sh@549 -- # xtrace_disable 00:14:38.298 09:43:01 -- common/autotest_common.sh@10 -- # set +x 00:14:38.298 09:43:01 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:14:38.298 09:43:01 -- event/cpu_locks.sh@67 -- # no_locks 00:14:38.298 09:43:01 -- event/cpu_locks.sh@26 -- # lock_files=() 00:14:38.298 09:43:01 -- event/cpu_locks.sh@26 -- # local lock_files 00:14:38.298 09:43:01 -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:14:38.298 09:43:01 -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks 00:14:38.298 09:43:01 -- common/autotest_common.sh@549 -- # xtrace_disable 00:14:38.298 09:43:01 -- common/autotest_common.sh@10 -- # set +x 00:14:38.298 09:43:01 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:14:38.298 09:43:01 -- event/cpu_locks.sh@71 -- # locks_exist 60879 00:14:38.298 09:43:01 -- event/cpu_locks.sh@22 -- # lslocks -p 60879 00:14:38.298 09:43:01 -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:14:38.557 09:43:01 -- event/cpu_locks.sh@73 -- # killprocess 60879 00:14:38.557 09:43:01 -- common/autotest_common.sh@936 -- # '[' -z 60879 ']' 00:14:38.557 09:43:01 -- common/autotest_common.sh@940 -- # kill -0 60879 00:14:38.557 09:43:01 -- common/autotest_common.sh@941 -- # uname 00:14:38.557 09:43:01 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:14:38.557 09:43:01 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 60879 00:14:38.557 killing process with pid 60879 00:14:38.557 09:43:01 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:14:38.557 09:43:01 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:14:38.557 09:43:01 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 60879' 00:14:38.557 09:43:01 -- common/autotest_common.sh@955 -- # kill 60879 00:14:38.557 09:43:01 -- common/autotest_common.sh@960 -- # wait 60879 00:14:41.092 ************************************ 00:14:41.092 END TEST default_locks_via_rpc 00:14:41.092 ************************************ 00:14:41.092 00:14:41.092 real 0m4.087s 00:14:41.092 user 0m3.996s 00:14:41.092 sys 0m0.730s 00:14:41.092 09:43:04 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:14:41.092 09:43:04 -- common/autotest_common.sh@10 -- # set +x 00:14:41.092 09:43:04 -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask 00:14:41.092 09:43:04 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:14:41.092 09:43:04 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:14:41.092 09:43:04 -- common/autotest_common.sh@10 -- # set +x 00:14:41.092 ************************************ 00:14:41.092 START TEST non_locking_app_on_locked_coremask 00:14:41.092 ************************************ 00:14:41.092 09:43:04 -- 
common/autotest_common.sh@1111 -- # non_locking_app_on_locked_coremask 00:14:41.092 09:43:04 -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=60962 00:14:41.092 09:43:04 -- event/cpu_locks.sh@79 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:14:41.092 09:43:04 -- event/cpu_locks.sh@81 -- # waitforlisten 60962 /var/tmp/spdk.sock 00:14:41.092 09:43:04 -- common/autotest_common.sh@817 -- # '[' -z 60962 ']' 00:14:41.092 09:43:04 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:41.092 09:43:04 -- common/autotest_common.sh@822 -- # local max_retries=100 00:14:41.092 09:43:04 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:41.092 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:41.092 09:43:04 -- common/autotest_common.sh@826 -- # xtrace_disable 00:14:41.092 09:43:04 -- common/autotest_common.sh@10 -- # set +x 00:14:41.092 [2024-04-18 09:43:04.322713] Starting SPDK v24.05-pre git sha1 65b4e17c6 / DPDK 23.11.0 initialization... 00:14:41.093 [2024-04-18 09:43:04.323295] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60962 ] 00:14:41.352 [2024-04-18 09:43:04.503919] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:41.611 [2024-04-18 09:43:04.781958] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:14:42.546 09:43:05 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:14:42.546 09:43:05 -- common/autotest_common.sh@850 -- # return 0 00:14:42.546 09:43:05 -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=60984 00:14:42.546 09:43:05 -- event/cpu_locks.sh@83 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock 00:14:42.546 09:43:05 -- event/cpu_locks.sh@85 -- # waitforlisten 60984 /var/tmp/spdk2.sock 00:14:42.546 09:43:05 -- common/autotest_common.sh@817 -- # '[' -z 60984 ']' 00:14:42.546 09:43:05 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk2.sock 00:14:42.546 09:43:05 -- common/autotest_common.sh@822 -- # local max_retries=100 00:14:42.546 09:43:05 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:14:42.546 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:14:42.546 09:43:05 -- common/autotest_common.sh@826 -- # xtrace_disable 00:14:42.546 09:43:05 -- common/autotest_common.sh@10 -- # set +x 00:14:42.546 [2024-04-18 09:43:05.780431] Starting SPDK v24.05-pre git sha1 65b4e17c6 / DPDK 23.11.0 initialization... 00:14:42.546 [2024-04-18 09:43:05.780975] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60984 ] 00:14:42.805 [2024-04-18 09:43:05.958120] app.c: 825:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
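non_locking_app_on_locked_coremask, traced above, shows the intended coexistence path: the first target claims core 0 normally, and the second is started with --disable-cpumask-locks and its own RPC socket so it can share the same core mask. The launch sequence from the trace, condensed into a sketch:

    spdk_tgt=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt

    "$spdk_tgt" -m 0x1 &                                                  # first instance locks core 0
    pid1=$!
    waitforlisten "$pid1" /var/tmp/spdk.sock

    "$spdk_tgt" -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock &   # second instance skips the lock
    pid2=$!
    waitforlisten "$pid2" /var/tmp/spdk2.sock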
00:14:42.805 [2024-04-18 09:43:05.962256] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:43.373 [2024-04-18 09:43:06.515329] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:14:45.276 09:43:08 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:14:45.276 09:43:08 -- common/autotest_common.sh@850 -- # return 0 00:14:45.276 09:43:08 -- event/cpu_locks.sh@87 -- # locks_exist 60962 00:14:45.276 09:43:08 -- event/cpu_locks.sh@22 -- # lslocks -p 60962 00:14:45.276 09:43:08 -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:14:46.212 09:43:09 -- event/cpu_locks.sh@89 -- # killprocess 60962 00:14:46.212 09:43:09 -- common/autotest_common.sh@936 -- # '[' -z 60962 ']' 00:14:46.212 09:43:09 -- common/autotest_common.sh@940 -- # kill -0 60962 00:14:46.212 09:43:09 -- common/autotest_common.sh@941 -- # uname 00:14:46.212 09:43:09 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:14:46.212 09:43:09 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 60962 00:14:46.212 killing process with pid 60962 00:14:46.212 09:43:09 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:14:46.212 09:43:09 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:14:46.212 09:43:09 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 60962' 00:14:46.212 09:43:09 -- common/autotest_common.sh@955 -- # kill 60962 00:14:46.212 09:43:09 -- common/autotest_common.sh@960 -- # wait 60962 00:14:51.495 09:43:13 -- event/cpu_locks.sh@90 -- # killprocess 60984 00:14:51.495 09:43:13 -- common/autotest_common.sh@936 -- # '[' -z 60984 ']' 00:14:51.495 09:43:13 -- common/autotest_common.sh@940 -- # kill -0 60984 00:14:51.495 09:43:13 -- common/autotest_common.sh@941 -- # uname 00:14:51.495 09:43:13 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:14:51.495 09:43:13 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 60984 00:14:51.495 killing process with pid 60984 00:14:51.495 09:43:13 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:14:51.495 09:43:13 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:14:51.495 09:43:13 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 60984' 00:14:51.495 09:43:13 -- common/autotest_common.sh@955 -- # kill 60984 00:14:51.495 09:43:13 -- common/autotest_common.sh@960 -- # wait 60984 00:14:52.870 ************************************ 00:14:52.870 END TEST non_locking_app_on_locked_coremask 00:14:52.870 ************************************ 00:14:52.870 00:14:52.870 real 0m11.989s 00:14:52.870 user 0m12.367s 00:14:52.870 sys 0m1.549s 00:14:52.870 09:43:16 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:14:52.870 09:43:16 -- common/autotest_common.sh@10 -- # set +x 00:14:52.870 09:43:16 -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask locking_app_on_unlocked_coremask 00:14:52.870 09:43:16 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:14:52.870 09:43:16 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:14:52.870 09:43:16 -- common/autotest_common.sh@10 -- # set +x 00:14:53.129 ************************************ 00:14:53.129 START TEST locking_app_on_unlocked_coremask 00:14:53.129 ************************************ 00:14:53.129 09:43:16 -- common/autotest_common.sh@1111 -- # locking_app_on_unlocked_coremask 00:14:53.129 09:43:16 -- event/cpu_locks.sh@98 -- # spdk_tgt_pid=61137 00:14:53.129 09:43:16 -- event/cpu_locks.sh@97 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 
0x1 --disable-cpumask-locks 00:14:53.129 09:43:16 -- event/cpu_locks.sh@99 -- # waitforlisten 61137 /var/tmp/spdk.sock 00:14:53.129 09:43:16 -- common/autotest_common.sh@817 -- # '[' -z 61137 ']' 00:14:53.129 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:53.129 09:43:16 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:53.129 09:43:16 -- common/autotest_common.sh@822 -- # local max_retries=100 00:14:53.129 09:43:16 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:53.129 09:43:16 -- common/autotest_common.sh@826 -- # xtrace_disable 00:14:53.129 09:43:16 -- common/autotest_common.sh@10 -- # set +x 00:14:53.129 [2024-04-18 09:43:16.403869] Starting SPDK v24.05-pre git sha1 65b4e17c6 / DPDK 23.11.0 initialization... 00:14:53.129 [2024-04-18 09:43:16.404023] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61137 ] 00:14:53.387 [2024-04-18 09:43:16.565965] app.c: 825:spdk_app_start: *NOTICE*: CPU core locks deactivated. 00:14:53.387 [2024-04-18 09:43:16.566094] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:53.645 [2024-04-18 09:43:16.809371] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:14:54.581 09:43:17 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:14:54.581 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:14:54.581 09:43:17 -- common/autotest_common.sh@850 -- # return 0 00:14:54.581 09:43:17 -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=61159 00:14:54.581 09:43:17 -- event/cpu_locks.sh@101 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:14:54.581 09:43:17 -- event/cpu_locks.sh@103 -- # waitforlisten 61159 /var/tmp/spdk2.sock 00:14:54.581 09:43:17 -- common/autotest_common.sh@817 -- # '[' -z 61159 ']' 00:14:54.581 09:43:17 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk2.sock 00:14:54.581 09:43:17 -- common/autotest_common.sh@822 -- # local max_retries=100 00:14:54.581 09:43:17 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:14:54.581 09:43:17 -- common/autotest_common.sh@826 -- # xtrace_disable 00:14:54.581 09:43:17 -- common/autotest_common.sh@10 -- # set +x 00:14:54.581 [2024-04-18 09:43:17.763390] Starting SPDK v24.05-pre git sha1 65b4e17c6 / DPDK 23.11.0 initialization... 
00:14:54.581 [2024-04-18 09:43:17.764008] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61159 ] 00:14:54.581 [2024-04-18 09:43:17.940887] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:55.148 [2024-04-18 09:43:18.506876] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:14:57.680 09:43:20 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:14:57.680 09:43:20 -- common/autotest_common.sh@850 -- # return 0 00:14:57.680 09:43:20 -- event/cpu_locks.sh@105 -- # locks_exist 61159 00:14:57.680 09:43:20 -- event/cpu_locks.sh@22 -- # lslocks -p 61159 00:14:57.680 09:43:20 -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:14:57.938 09:43:21 -- event/cpu_locks.sh@107 -- # killprocess 61137 00:14:57.938 09:43:21 -- common/autotest_common.sh@936 -- # '[' -z 61137 ']' 00:14:57.938 09:43:21 -- common/autotest_common.sh@940 -- # kill -0 61137 00:14:57.938 09:43:21 -- common/autotest_common.sh@941 -- # uname 00:14:58.197 09:43:21 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:14:58.197 09:43:21 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 61137 00:14:58.197 killing process with pid 61137 00:14:58.197 09:43:21 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:14:58.197 09:43:21 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:14:58.197 09:43:21 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 61137' 00:14:58.197 09:43:21 -- common/autotest_common.sh@955 -- # kill 61137 00:14:58.197 09:43:21 -- common/autotest_common.sh@960 -- # wait 61137 00:15:03.460 09:43:25 -- event/cpu_locks.sh@108 -- # killprocess 61159 00:15:03.460 09:43:25 -- common/autotest_common.sh@936 -- # '[' -z 61159 ']' 00:15:03.460 09:43:25 -- common/autotest_common.sh@940 -- # kill -0 61159 00:15:03.460 09:43:25 -- common/autotest_common.sh@941 -- # uname 00:15:03.460 09:43:25 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:15:03.460 09:43:25 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 61159 00:15:03.460 killing process with pid 61159 00:15:03.460 09:43:25 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:15:03.460 09:43:25 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:15:03.460 09:43:25 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 61159' 00:15:03.460 09:43:25 -- common/autotest_common.sh@955 -- # kill 61159 00:15:03.460 09:43:25 -- common/autotest_common.sh@960 -- # wait 61159 00:15:04.839 ************************************ 00:15:04.839 END TEST locking_app_on_unlocked_coremask 00:15:04.839 ************************************ 00:15:04.839 00:15:04.839 real 0m11.835s 00:15:04.839 user 0m12.297s 00:15:04.839 sys 0m1.454s 00:15:04.839 09:43:28 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:15:04.839 09:43:28 -- common/autotest_common.sh@10 -- # set +x 00:15:04.839 09:43:28 -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask 00:15:04.839 09:43:28 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:15:04.839 09:43:28 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:15:04.839 09:43:28 -- common/autotest_common.sh@10 -- # set +x 00:15:05.097 ************************************ 00:15:05.097 START TEST locking_app_on_locked_coremask 00:15:05.097 
************************************ 00:15:05.097 09:43:28 -- common/autotest_common.sh@1111 -- # locking_app_on_locked_coremask 00:15:05.097 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:05.097 09:43:28 -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=61315 00:15:05.097 09:43:28 -- event/cpu_locks.sh@114 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:15:05.097 09:43:28 -- event/cpu_locks.sh@116 -- # waitforlisten 61315 /var/tmp/spdk.sock 00:15:05.097 09:43:28 -- common/autotest_common.sh@817 -- # '[' -z 61315 ']' 00:15:05.097 09:43:28 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:05.097 09:43:28 -- common/autotest_common.sh@822 -- # local max_retries=100 00:15:05.097 09:43:28 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:05.097 09:43:28 -- common/autotest_common.sh@826 -- # xtrace_disable 00:15:05.097 09:43:28 -- common/autotest_common.sh@10 -- # set +x 00:15:05.097 [2024-04-18 09:43:28.374995] Starting SPDK v24.05-pre git sha1 65b4e17c6 / DPDK 23.11.0 initialization... 00:15:05.097 [2024-04-18 09:43:28.375783] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61315 ] 00:15:05.355 [2024-04-18 09:43:28.562598] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:05.614 [2024-04-18 09:43:28.818886] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:15:06.548 09:43:29 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:15:06.548 09:43:29 -- common/autotest_common.sh@850 -- # return 0 00:15:06.548 09:43:29 -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=61331 00:15:06.548 09:43:29 -- event/cpu_locks.sh@120 -- # NOT waitforlisten 61331 /var/tmp/spdk2.sock 00:15:06.548 09:43:29 -- common/autotest_common.sh@638 -- # local es=0 00:15:06.548 09:43:29 -- event/cpu_locks.sh@118 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:15:06.548 09:43:29 -- common/autotest_common.sh@640 -- # valid_exec_arg waitforlisten 61331 /var/tmp/spdk2.sock 00:15:06.548 09:43:29 -- common/autotest_common.sh@626 -- # local arg=waitforlisten 00:15:06.548 09:43:29 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:15:06.548 09:43:29 -- common/autotest_common.sh@630 -- # type -t waitforlisten 00:15:06.548 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:15:06.548 09:43:29 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:15:06.548 09:43:29 -- common/autotest_common.sh@641 -- # waitforlisten 61331 /var/tmp/spdk2.sock 00:15:06.548 09:43:29 -- common/autotest_common.sh@817 -- # '[' -z 61331 ']' 00:15:06.548 09:43:29 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk2.sock 00:15:06.548 09:43:29 -- common/autotest_common.sh@822 -- # local max_retries=100 00:15:06.548 09:43:29 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:15:06.548 09:43:29 -- common/autotest_common.sh@826 -- # xtrace_disable 00:15:06.548 09:43:29 -- common/autotest_common.sh@10 -- # set +x 00:15:06.549 [2024-04-18 09:43:29.765394] Starting SPDK v24.05-pre git sha1 65b4e17c6 / DPDK 23.11.0 initialization... 
00:15:06.549 [2024-04-18 09:43:29.765583] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61331 ] 00:15:06.807 [2024-04-18 09:43:29.948789] app.c: 690:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 61315 has claimed it. 00:15:06.807 [2024-04-18 09:43:29.948891] app.c: 821:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:15:07.066 ERROR: process (pid: 61331) is no longer running 00:15:07.066 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 832: kill: (61331) - No such process 00:15:07.066 09:43:30 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:15:07.066 09:43:30 -- common/autotest_common.sh@850 -- # return 1 00:15:07.066 09:43:30 -- common/autotest_common.sh@641 -- # es=1 00:15:07.066 09:43:30 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:15:07.066 09:43:30 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:15:07.066 09:43:30 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:15:07.066 09:43:30 -- event/cpu_locks.sh@122 -- # locks_exist 61315 00:15:07.066 09:43:30 -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:15:07.066 09:43:30 -- event/cpu_locks.sh@22 -- # lslocks -p 61315 00:15:07.635 09:43:30 -- event/cpu_locks.sh@124 -- # killprocess 61315 00:15:07.635 09:43:30 -- common/autotest_common.sh@936 -- # '[' -z 61315 ']' 00:15:07.635 09:43:30 -- common/autotest_common.sh@940 -- # kill -0 61315 00:15:07.635 09:43:30 -- common/autotest_common.sh@941 -- # uname 00:15:07.635 09:43:30 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:15:07.635 09:43:30 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 61315 00:15:07.635 killing process with pid 61315 00:15:07.635 09:43:30 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:15:07.635 09:43:30 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:15:07.635 09:43:30 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 61315' 00:15:07.635 09:43:30 -- common/autotest_common.sh@955 -- # kill 61315 00:15:07.635 09:43:30 -- common/autotest_common.sh@960 -- # wait 61315 00:15:10.165 00:15:10.165 real 0m5.047s 00:15:10.165 user 0m5.297s 00:15:10.165 sys 0m0.943s 00:15:10.165 ************************************ 00:15:10.165 END TEST locking_app_on_locked_coremask 00:15:10.165 ************************************ 00:15:10.165 09:43:33 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:15:10.165 09:43:33 -- common/autotest_common.sh@10 -- # set +x 00:15:10.165 09:43:33 -- event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask 00:15:10.165 09:43:33 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:15:10.165 09:43:33 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:15:10.165 09:43:33 -- common/autotest_common.sh@10 -- # set +x 00:15:10.165 ************************************ 00:15:10.165 START TEST locking_overlapped_coremask 00:15:10.165 ************************************ 00:15:10.165 09:43:33 -- common/autotest_common.sh@1111 -- # locking_overlapped_coremask 00:15:10.165 09:43:33 -- event/cpu_locks.sh@132 -- # spdk_tgt_pid=61410 00:15:10.165 09:43:33 -- event/cpu_locks.sh@131 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 00:15:10.165 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:15:10.165 09:43:33 -- event/cpu_locks.sh@133 -- # waitforlisten 61410 /var/tmp/spdk.sock 00:15:10.165 09:43:33 -- common/autotest_common.sh@817 -- # '[' -z 61410 ']' 00:15:10.165 09:43:33 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:10.165 09:43:33 -- common/autotest_common.sh@822 -- # local max_retries=100 00:15:10.165 09:43:33 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:10.165 09:43:33 -- common/autotest_common.sh@826 -- # xtrace_disable 00:15:10.165 09:43:33 -- common/autotest_common.sh@10 -- # set +x 00:15:10.165 [2024-04-18 09:43:33.534838] Starting SPDK v24.05-pre git sha1 65b4e17c6 / DPDK 23.11.0 initialization... 00:15:10.165 [2024-04-18 09:43:33.535262] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61410 ] 00:15:10.423 [2024-04-18 09:43:33.699515] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 3 00:15:10.681 [2024-04-18 09:43:33.992615] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:15:10.681 [2024-04-18 09:43:33.992718] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:15:10.681 [2024-04-18 09:43:33.992737] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:15:11.615 09:43:34 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:15:11.615 09:43:34 -- common/autotest_common.sh@850 -- # return 0 00:15:11.615 09:43:34 -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=61434 00:15:11.615 09:43:34 -- event/cpu_locks.sh@135 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock 00:15:11.615 09:43:34 -- event/cpu_locks.sh@137 -- # NOT waitforlisten 61434 /var/tmp/spdk2.sock 00:15:11.615 09:43:34 -- common/autotest_common.sh@638 -- # local es=0 00:15:11.615 09:43:34 -- common/autotest_common.sh@640 -- # valid_exec_arg waitforlisten 61434 /var/tmp/spdk2.sock 00:15:11.615 09:43:34 -- common/autotest_common.sh@626 -- # local arg=waitforlisten 00:15:11.615 09:43:34 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:15:11.615 09:43:34 -- common/autotest_common.sh@630 -- # type -t waitforlisten 00:15:11.615 09:43:34 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:15:11.615 09:43:34 -- common/autotest_common.sh@641 -- # waitforlisten 61434 /var/tmp/spdk2.sock 00:15:11.615 09:43:34 -- common/autotest_common.sh@817 -- # '[' -z 61434 ']' 00:15:11.615 09:43:34 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk2.sock 00:15:11.615 09:43:34 -- common/autotest_common.sh@822 -- # local max_retries=100 00:15:11.615 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:15:11.615 09:43:34 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:15:11.615 09:43:34 -- common/autotest_common.sh@826 -- # xtrace_disable 00:15:11.615 09:43:34 -- common/autotest_common.sh@10 -- # set +x 00:15:11.615 [2024-04-18 09:43:34.906121] Starting SPDK v24.05-pre git sha1 65b4e17c6 / DPDK 23.11.0 initialization... 
00:15:11.615 [2024-04-18 09:43:34.906314] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61434 ] 00:15:11.880 [2024-04-18 09:43:35.100078] app.c: 690:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 61410 has claimed it. 00:15:11.880 [2024-04-18 09:43:35.103204] app.c: 821:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:15:12.448 ERROR: process (pid: 61434) is no longer running 00:15:12.448 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 832: kill: (61434) - No such process 00:15:12.448 09:43:35 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:15:12.448 09:43:35 -- common/autotest_common.sh@850 -- # return 1 00:15:12.448 09:43:35 -- common/autotest_common.sh@641 -- # es=1 00:15:12.448 09:43:35 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:15:12.448 09:43:35 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:15:12.448 09:43:35 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:15:12.448 09:43:35 -- event/cpu_locks.sh@139 -- # check_remaining_locks 00:15:12.448 09:43:35 -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:15:12.448 09:43:35 -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:15:12.448 09:43:35 -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:15:12.448 09:43:35 -- event/cpu_locks.sh@141 -- # killprocess 61410 00:15:12.448 09:43:35 -- common/autotest_common.sh@936 -- # '[' -z 61410 ']' 00:15:12.448 09:43:35 -- common/autotest_common.sh@940 -- # kill -0 61410 00:15:12.448 09:43:35 -- common/autotest_common.sh@941 -- # uname 00:15:12.448 09:43:35 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:15:12.448 09:43:35 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 61410 00:15:12.448 09:43:35 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:15:12.448 09:43:35 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:15:12.448 09:43:35 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 61410' 00:15:12.448 killing process with pid 61410 00:15:12.448 09:43:35 -- common/autotest_common.sh@955 -- # kill 61410 00:15:12.448 09:43:35 -- common/autotest_common.sh@960 -- # wait 61410 00:15:14.977 00:15:14.977 real 0m4.450s 00:15:14.977 user 0m11.515s 00:15:14.977 sys 0m0.674s 00:15:14.977 09:43:37 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:15:14.977 09:43:37 -- common/autotest_common.sh@10 -- # set +x 00:15:14.977 ************************************ 00:15:14.977 END TEST locking_overlapped_coremask 00:15:14.977 ************************************ 00:15:14.977 09:43:37 -- event/cpu_locks.sh@172 -- # run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc 00:15:14.977 09:43:37 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:15:14.977 09:43:37 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:15:14.977 09:43:37 -- common/autotest_common.sh@10 -- # set +x 00:15:14.977 ************************************ 00:15:14.977 START TEST locking_overlapped_coremask_via_rpc 00:15:14.977 
************************************ 00:15:14.977 09:43:37 -- common/autotest_common.sh@1111 -- # locking_overlapped_coremask_via_rpc 00:15:14.977 09:43:37 -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=61503 00:15:14.977 09:43:37 -- event/cpu_locks.sh@147 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks 00:15:14.977 09:43:37 -- event/cpu_locks.sh@149 -- # waitforlisten 61503 /var/tmp/spdk.sock 00:15:14.977 09:43:37 -- common/autotest_common.sh@817 -- # '[' -z 61503 ']' 00:15:14.977 09:43:37 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:14.977 09:43:37 -- common/autotest_common.sh@822 -- # local max_retries=100 00:15:14.977 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:14.977 09:43:37 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:14.977 09:43:37 -- common/autotest_common.sh@826 -- # xtrace_disable 00:15:14.977 09:43:37 -- common/autotest_common.sh@10 -- # set +x 00:15:14.977 [2024-04-18 09:43:38.087400] Starting SPDK v24.05-pre git sha1 65b4e17c6 / DPDK 23.11.0 initialization... 00:15:14.977 [2024-04-18 09:43:38.087561] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61503 ] 00:15:14.977 [2024-04-18 09:43:38.250613] app.c: 825:spdk_app_start: *NOTICE*: CPU core locks deactivated. 00:15:14.977 [2024-04-18 09:43:38.250740] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 3 00:15:15.235 [2024-04-18 09:43:38.497896] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:15:15.235 [2024-04-18 09:43:38.498055] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:15:15.235 [2024-04-18 09:43:38.498100] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:15:16.169 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:15:16.169 09:43:39 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:15:16.169 09:43:39 -- common/autotest_common.sh@850 -- # return 0 00:15:16.169 09:43:39 -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=61521 00:15:16.169 09:43:39 -- event/cpu_locks.sh@153 -- # waitforlisten 61521 /var/tmp/spdk2.sock 00:15:16.169 09:43:39 -- event/cpu_locks.sh@151 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks 00:15:16.169 09:43:39 -- common/autotest_common.sh@817 -- # '[' -z 61521 ']' 00:15:16.169 09:43:39 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk2.sock 00:15:16.169 09:43:39 -- common/autotest_common.sh@822 -- # local max_retries=100 00:15:16.169 09:43:39 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:15:16.169 09:43:39 -- common/autotest_common.sh@826 -- # xtrace_disable 00:15:16.169 09:43:39 -- common/autotest_common.sh@10 -- # set +x 00:15:16.169 [2024-04-18 09:43:39.476233] Starting SPDK v24.05-pre git sha1 65b4e17c6 / DPDK 23.11.0 initialization... 
00:15:16.170 [2024-04-18 09:43:39.476458] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61521 ] 00:15:16.428 [2024-04-18 09:43:39.676789] app.c: 825:spdk_app_start: *NOTICE*: CPU core locks deactivated. 00:15:16.428 [2024-04-18 09:43:39.676908] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 3 00:15:16.994 [2024-04-18 09:43:40.272958] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:15:16.994 [2024-04-18 09:43:40.277174] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:15:16.994 [2024-04-18 09:43:40.277186] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:15:18.896 09:43:42 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:15:18.896 09:43:42 -- common/autotest_common.sh@850 -- # return 0 00:15:18.896 09:43:42 -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks 00:15:18.896 09:43:42 -- common/autotest_common.sh@549 -- # xtrace_disable 00:15:18.896 09:43:42 -- common/autotest_common.sh@10 -- # set +x 00:15:18.896 09:43:42 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:15:18.896 09:43:42 -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:15:18.896 09:43:42 -- common/autotest_common.sh@638 -- # local es=0 00:15:18.896 09:43:42 -- common/autotest_common.sh@640 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:15:18.896 09:43:42 -- common/autotest_common.sh@626 -- # local arg=rpc_cmd 00:15:18.896 09:43:42 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:15:18.896 09:43:42 -- common/autotest_common.sh@630 -- # type -t rpc_cmd 00:15:18.896 09:43:42 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:15:18.896 09:43:42 -- common/autotest_common.sh@641 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:15:18.896 09:43:42 -- common/autotest_common.sh@549 -- # xtrace_disable 00:15:18.896 09:43:42 -- common/autotest_common.sh@10 -- # set +x 00:15:18.896 [2024-04-18 09:43:42.144477] app.c: 690:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 61503 has claimed it. 00:15:18.896 request: 00:15:18.896 { 00:15:18.896 "method": "framework_enable_cpumask_locks", 00:15:18.896 "req_id": 1 00:15:18.896 } 00:15:18.896 Got JSON-RPC error response 00:15:18.896 response: 00:15:18.896 { 00:15:18.896 "code": -32603, 00:15:18.896 "message": "Failed to claim CPU core: 2" 00:15:18.896 } 00:15:18.896 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
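The JSON-RPC exchange above is the core of this test: with --disable-cpumask-locks both targets start on overlapping masks, and the per-core lock files are only claimed once the RPC is issued. A minimal sketch of sending the same call by hand with SPDK's rpc.py — the script path and its -s socket flag are my assumptions, not shown in the log; the method name and socket paths are taken from the output above:
# Sketch only (rpc.py path and -s flag assumed); method name and sockets are from the log.
/home/vagrant/spdk_repo/spdk/scripts/rpc.py framework_enable_cpumask_locks                         # first target, /var/tmp/spdk.sock: succeeds
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk2.sock framework_enable_cpumask_locks  # second target: fails
# The second call returns the error shown above ("Failed to claim CPU core: 2")
# because process 61503 already holds the lock file for core 2.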
00:15:18.896 09:43:42 -- common/autotest_common.sh@577 -- # [[ 1 == 0 ]] 00:15:18.896 09:43:42 -- common/autotest_common.sh@641 -- # es=1 00:15:18.896 09:43:42 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:15:18.896 09:43:42 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:15:18.896 09:43:42 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:15:18.896 09:43:42 -- event/cpu_locks.sh@158 -- # waitforlisten 61503 /var/tmp/spdk.sock 00:15:18.896 09:43:42 -- common/autotest_common.sh@817 -- # '[' -z 61503 ']' 00:15:18.896 09:43:42 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:18.896 09:43:42 -- common/autotest_common.sh@822 -- # local max_retries=100 00:15:18.896 09:43:42 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:18.896 09:43:42 -- common/autotest_common.sh@826 -- # xtrace_disable 00:15:18.896 09:43:42 -- common/autotest_common.sh@10 -- # set +x 00:15:19.154 09:43:42 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:15:19.154 09:43:42 -- common/autotest_common.sh@850 -- # return 0 00:15:19.154 09:43:42 -- event/cpu_locks.sh@159 -- # waitforlisten 61521 /var/tmp/spdk2.sock 00:15:19.154 09:43:42 -- common/autotest_common.sh@817 -- # '[' -z 61521 ']' 00:15:19.154 09:43:42 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk2.sock 00:15:19.154 09:43:42 -- common/autotest_common.sh@822 -- # local max_retries=100 00:15:19.154 09:43:42 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:15:19.154 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:15:19.154 09:43:42 -- common/autotest_common.sh@826 -- # xtrace_disable 00:15:19.154 09:43:42 -- common/autotest_common.sh@10 -- # set +x 00:15:19.721 09:43:42 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:15:19.721 09:43:42 -- common/autotest_common.sh@850 -- # return 0 00:15:19.721 09:43:42 -- event/cpu_locks.sh@161 -- # check_remaining_locks 00:15:19.721 09:43:42 -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:15:19.721 09:43:42 -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:15:19.721 09:43:42 -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:15:19.721 00:15:19.721 real 0m4.831s 00:15:19.721 user 0m1.637s 00:15:19.721 sys 0m0.236s 00:15:19.721 09:43:42 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:15:19.721 09:43:42 -- common/autotest_common.sh@10 -- # set +x 00:15:19.721 ************************************ 00:15:19.721 END TEST locking_overlapped_coremask_via_rpc 00:15:19.721 ************************************ 00:15:19.721 09:43:42 -- event/cpu_locks.sh@174 -- # cleanup 00:15:19.721 09:43:42 -- event/cpu_locks.sh@15 -- # [[ -z 61503 ]] 00:15:19.721 09:43:42 -- event/cpu_locks.sh@15 -- # killprocess 61503 00:15:19.721 09:43:42 -- common/autotest_common.sh@936 -- # '[' -z 61503 ']' 00:15:19.721 09:43:42 -- common/autotest_common.sh@940 -- # kill -0 61503 00:15:19.721 09:43:42 -- common/autotest_common.sh@941 -- # uname 00:15:19.721 09:43:42 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:15:19.721 09:43:42 -- common/autotest_common.sh@942 -- # ps 
--no-headers -o comm= 61503 00:15:19.721 killing process with pid 61503 00:15:19.721 09:43:42 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:15:19.721 09:43:42 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:15:19.721 09:43:42 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 61503' 00:15:19.721 09:43:42 -- common/autotest_common.sh@955 -- # kill 61503 00:15:19.721 09:43:42 -- common/autotest_common.sh@960 -- # wait 61503 00:15:22.249 09:43:45 -- event/cpu_locks.sh@16 -- # [[ -z 61521 ]] 00:15:22.249 09:43:45 -- event/cpu_locks.sh@16 -- # killprocess 61521 00:15:22.249 09:43:45 -- common/autotest_common.sh@936 -- # '[' -z 61521 ']' 00:15:22.249 09:43:45 -- common/autotest_common.sh@940 -- # kill -0 61521 00:15:22.249 09:43:45 -- common/autotest_common.sh@941 -- # uname 00:15:22.249 09:43:45 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:15:22.249 09:43:45 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 61521 00:15:22.249 killing process with pid 61521 00:15:22.249 09:43:45 -- common/autotest_common.sh@942 -- # process_name=reactor_2 00:15:22.249 09:43:45 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']' 00:15:22.249 09:43:45 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 61521' 00:15:22.249 09:43:45 -- common/autotest_common.sh@955 -- # kill 61521 00:15:22.249 09:43:45 -- common/autotest_common.sh@960 -- # wait 61521 00:15:24.780 09:43:47 -- event/cpu_locks.sh@18 -- # rm -f 00:15:24.780 09:43:47 -- event/cpu_locks.sh@1 -- # cleanup 00:15:24.780 09:43:47 -- event/cpu_locks.sh@15 -- # [[ -z 61503 ]] 00:15:24.780 09:43:47 -- event/cpu_locks.sh@15 -- # killprocess 61503 00:15:24.780 09:43:47 -- common/autotest_common.sh@936 -- # '[' -z 61503 ']' 00:15:24.780 09:43:47 -- common/autotest_common.sh@940 -- # kill -0 61503 00:15:24.780 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 940: kill: (61503) - No such process 00:15:24.780 09:43:47 -- common/autotest_common.sh@963 -- # echo 'Process with pid 61503 is not found' 00:15:24.780 Process with pid 61503 is not found 00:15:24.780 09:43:47 -- event/cpu_locks.sh@16 -- # [[ -z 61521 ]] 00:15:24.780 Process with pid 61521 is not found 00:15:24.780 09:43:47 -- event/cpu_locks.sh@16 -- # killprocess 61521 00:15:24.780 09:43:47 -- common/autotest_common.sh@936 -- # '[' -z 61521 ']' 00:15:24.780 09:43:47 -- common/autotest_common.sh@940 -- # kill -0 61521 00:15:24.780 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 940: kill: (61521) - No such process 00:15:24.780 09:43:47 -- common/autotest_common.sh@963 -- # echo 'Process with pid 61521 is not found' 00:15:24.781 09:43:47 -- event/cpu_locks.sh@18 -- # rm -f 00:15:24.781 ************************************ 00:15:24.781 END TEST cpu_locks 00:15:24.781 ************************************ 00:15:24.781 00:15:24.781 real 0m52.014s 00:15:24.781 user 1m26.564s 00:15:24.781 sys 0m7.875s 00:15:24.781 09:43:47 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:15:24.781 09:43:47 -- common/autotest_common.sh@10 -- # set +x 00:15:24.781 ************************************ 00:15:24.781 END TEST event 00:15:24.781 ************************************ 00:15:24.781 00:15:24.781 real 1m25.024s 00:15:24.781 user 2m28.969s 00:15:24.781 sys 0m12.546s 00:15:24.781 09:43:47 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:15:24.781 09:43:47 -- common/autotest_common.sh@10 -- # set +x 00:15:24.781 09:43:47 -- spdk/autotest.sh@178 -- # run_test thread 
/home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:15:24.781 09:43:47 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:15:24.781 09:43:47 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:15:24.781 09:43:47 -- common/autotest_common.sh@10 -- # set +x 00:15:24.781 ************************************ 00:15:24.781 START TEST thread 00:15:24.781 ************************************ 00:15:24.781 09:43:47 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:15:24.781 * Looking for test storage... 00:15:24.781 * Found test storage at /home/vagrant/spdk_repo/spdk/test/thread 00:15:24.781 09:43:47 -- thread/thread.sh@11 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:15:24.781 09:43:47 -- common/autotest_common.sh@1087 -- # '[' 8 -le 1 ']' 00:15:24.781 09:43:47 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:15:24.781 09:43:47 -- common/autotest_common.sh@10 -- # set +x 00:15:24.781 ************************************ 00:15:24.781 START TEST thread_poller_perf 00:15:24.781 ************************************ 00:15:24.781 09:43:47 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:15:24.781 [2024-04-18 09:43:47.986797] Starting SPDK v24.05-pre git sha1 65b4e17c6 / DPDK 23.11.0 initialization... 00:15:24.781 [2024-04-18 09:43:47.988024] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61722 ] 00:15:25.040 [2024-04-18 09:43:48.181726] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:25.299 Running 1000 pollers for 1 seconds with 1 microseconds period. 00:15:25.299 [2024-04-18 09:43:48.470331] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:15:26.676 ====================================== 00:15:26.676 busy:2215844042 (cyc) 00:15:26.676 total_run_count: 294000 00:15:26.676 tsc_hz: 2200000000 (cyc) 00:15:26.676 ====================================== 00:15:26.676 poller_cost: 7536 (cyc), 3425 (nsec) 00:15:26.676 ************************************ 00:15:26.676 END TEST thread_poller_perf 00:15:26.676 ************************************ 00:15:26.676 00:15:26.676 real 0m1.944s 00:15:26.676 user 0m1.695s 00:15:26.676 sys 0m0.134s 00:15:26.676 09:43:49 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:15:26.676 09:43:49 -- common/autotest_common.sh@10 -- # set +x 00:15:26.676 09:43:49 -- thread/thread.sh@12 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:15:26.676 09:43:49 -- common/autotest_common.sh@1087 -- # '[' 8 -le 1 ']' 00:15:26.676 09:43:49 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:15:26.676 09:43:49 -- common/autotest_common.sh@10 -- # set +x 00:15:26.676 ************************************ 00:15:26.676 START TEST thread_poller_perf 00:15:26.676 ************************************ 00:15:26.676 09:43:49 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:15:26.676 [2024-04-18 09:43:50.038019] Starting SPDK v24.05-pre git sha1 65b4e17c6 / DPDK 23.11.0 initialization... 
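The poller_perf summary above is plain cycle arithmetic: poller_cost is the busy cycle count divided by total_run_count, converted to nanoseconds via the reported TSC frequency. A small sketch re-deriving this run's figures — the variable names are mine, and poller_perf's exact rounding is not shown in the log, but truncating integer division reproduces the reported 7536 cyc / 3425 nsec:
# Re-derive poller_cost from the figures printed above (names are illustrative).
busy_cyc=2215844042     # busy: ... (cyc)
run_count=294000        # total_run_count
tsc_hz=2200000000       # tsc_hz: ... (cyc)
cost_cyc=$(( busy_cyc / run_count ))               # -> 7536 cyc per poller invocation
cost_nsec=$(( cost_cyc * 1000000000 / tsc_hz ))    # -> 3425 nsec
echo "poller_cost: ${cost_cyc} (cyc), ${cost_nsec} (nsec)"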
00:15:26.676 [2024-04-18 09:43:50.038222] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61767 ] 00:15:26.935 [2024-04-18 09:43:50.203573] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:27.193 Running 1000 pollers for 1 seconds with 0 microseconds period. 00:15:27.193 [2024-04-18 09:43:50.479919] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:15:28.569 ====================================== 00:15:28.569 busy:2203870482 (cyc) 00:15:28.569 total_run_count: 3805000 00:15:28.569 tsc_hz: 2200000000 (cyc) 00:15:28.569 ====================================== 00:15:28.569 poller_cost: 579 (cyc), 263 (nsec) 00:15:28.569 00:15:28.569 real 0m1.868s 00:15:28.569 user 0m1.636s 00:15:28.569 sys 0m0.121s 00:15:28.569 09:43:51 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:15:28.569 ************************************ 00:15:28.569 END TEST thread_poller_perf 00:15:28.569 ************************************ 00:15:28.569 09:43:51 -- common/autotest_common.sh@10 -- # set +x 00:15:28.569 09:43:51 -- thread/thread.sh@17 -- # [[ y != \y ]] 00:15:28.569 ************************************ 00:15:28.569 END TEST thread 00:15:28.569 ************************************ 00:15:28.569 00:15:28.569 real 0m4.142s 00:15:28.569 user 0m3.452s 00:15:28.569 sys 0m0.434s 00:15:28.569 09:43:51 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:15:28.569 09:43:51 -- common/autotest_common.sh@10 -- # set +x 00:15:28.569 09:43:51 -- spdk/autotest.sh@179 -- # run_test accel /home/vagrant/spdk_repo/spdk/test/accel/accel.sh 00:15:28.569 09:43:51 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:15:28.569 09:43:51 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:15:28.569 09:43:51 -- common/autotest_common.sh@10 -- # set +x 00:15:28.827 ************************************ 00:15:28.827 START TEST accel 00:15:28.827 ************************************ 00:15:28.827 09:43:52 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/accel/accel.sh 00:15:28.827 * Looking for test storage... 00:15:28.827 * Found test storage at /home/vagrant/spdk_repo/spdk/test/accel 00:15:28.827 09:43:52 -- accel/accel.sh@81 -- # declare -A expected_opcs 00:15:28.827 09:43:52 -- accel/accel.sh@82 -- # get_expected_opcs 00:15:28.827 09:43:52 -- accel/accel.sh@60 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:15:28.827 09:43:52 -- accel/accel.sh@62 -- # spdk_tgt_pid=61858 00:15:28.827 09:43:52 -- accel/accel.sh@63 -- # waitforlisten 61858 00:15:28.827 09:43:52 -- common/autotest_common.sh@817 -- # '[' -z 61858 ']' 00:15:28.827 09:43:52 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:28.827 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:28.827 09:43:52 -- common/autotest_common.sh@822 -- # local max_retries=100 00:15:28.827 09:43:52 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:15:28.827 09:43:52 -- common/autotest_common.sh@826 -- # xtrace_disable 00:15:28.827 09:43:52 -- common/autotest_common.sh@10 -- # set +x 00:15:28.827 09:43:52 -- accel/accel.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -c /dev/fd/63 00:15:28.827 09:43:52 -- accel/accel.sh@61 -- # build_accel_config 00:15:28.827 09:43:52 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:15:28.827 09:43:52 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:15:28.827 09:43:52 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:15:28.827 09:43:52 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:15:28.827 09:43:52 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:15:28.827 09:43:52 -- accel/accel.sh@40 -- # local IFS=, 00:15:28.827 09:43:52 -- accel/accel.sh@41 -- # jq -r . 00:15:29.085 [2024-04-18 09:43:52.225970] Starting SPDK v24.05-pre git sha1 65b4e17c6 / DPDK 23.11.0 initialization... 00:15:29.085 [2024-04-18 09:43:52.226237] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61858 ] 00:15:29.085 [2024-04-18 09:43:52.403990] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:29.343 [2024-04-18 09:43:52.647198] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:15:30.280 09:43:53 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:15:30.280 09:43:53 -- common/autotest_common.sh@850 -- # return 0 00:15:30.280 09:43:53 -- accel/accel.sh@65 -- # [[ 0 -gt 0 ]] 00:15:30.280 09:43:53 -- accel/accel.sh@66 -- # [[ 0 -gt 0 ]] 00:15:30.280 09:43:53 -- accel/accel.sh@67 -- # [[ 0 -gt 0 ]] 00:15:30.280 09:43:53 -- accel/accel.sh@68 -- # [[ -n '' ]] 00:15:30.280 09:43:53 -- accel/accel.sh@70 -- # exp_opcs=($($rpc_py accel_get_opc_assignments | jq -r ". | to_entries | map(\"\(.key)=\(.value)\") | .[]")) 00:15:30.280 09:43:53 -- accel/accel.sh@70 -- # rpc_cmd accel_get_opc_assignments 00:15:30.280 09:43:53 -- accel/accel.sh@70 -- # jq -r '. 
| to_entries | map("\(.key)=\(.value)") | .[]' 00:15:30.280 09:43:53 -- common/autotest_common.sh@549 -- # xtrace_disable 00:15:30.280 09:43:53 -- common/autotest_common.sh@10 -- # set +x 00:15:30.280 09:43:53 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:15:30.280 09:43:53 -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:15:30.280 09:43:53 -- accel/accel.sh@72 -- # IFS== 00:15:30.280 09:43:53 -- accel/accel.sh@72 -- # read -r opc module 00:15:30.280 09:43:53 -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:15:30.280 09:43:53 -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:15:30.280 09:43:53 -- accel/accel.sh@72 -- # IFS== 00:15:30.280 09:43:53 -- accel/accel.sh@72 -- # read -r opc module 00:15:30.280 09:43:53 -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:15:30.280 09:43:53 -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:15:30.280 09:43:53 -- accel/accel.sh@72 -- # IFS== 00:15:30.280 09:43:53 -- accel/accel.sh@72 -- # read -r opc module 00:15:30.280 09:43:53 -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:15:30.280 09:43:53 -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:15:30.280 09:43:53 -- accel/accel.sh@72 -- # IFS== 00:15:30.280 09:43:53 -- accel/accel.sh@72 -- # read -r opc module 00:15:30.280 09:43:53 -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:15:30.280 09:43:53 -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:15:30.280 09:43:53 -- accel/accel.sh@72 -- # IFS== 00:15:30.281 09:43:53 -- accel/accel.sh@72 -- # read -r opc module 00:15:30.281 09:43:53 -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:15:30.281 09:43:53 -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:15:30.281 09:43:53 -- accel/accel.sh@72 -- # IFS== 00:15:30.281 09:43:53 -- accel/accel.sh@72 -- # read -r opc module 00:15:30.281 09:43:53 -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:15:30.281 09:43:53 -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:15:30.281 09:43:53 -- accel/accel.sh@72 -- # IFS== 00:15:30.281 09:43:53 -- accel/accel.sh@72 -- # read -r opc module 00:15:30.281 09:43:53 -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:15:30.281 09:43:53 -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:15:30.281 09:43:53 -- accel/accel.sh@72 -- # IFS== 00:15:30.281 09:43:53 -- accel/accel.sh@72 -- # read -r opc module 00:15:30.281 09:43:53 -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:15:30.281 09:43:53 -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:15:30.281 09:43:53 -- accel/accel.sh@72 -- # IFS== 00:15:30.281 09:43:53 -- accel/accel.sh@72 -- # read -r opc module 00:15:30.281 09:43:53 -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:15:30.281 09:43:53 -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:15:30.281 09:43:53 -- accel/accel.sh@72 -- # IFS== 00:15:30.281 09:43:53 -- accel/accel.sh@72 -- # read -r opc module 00:15:30.281 09:43:53 -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:15:30.281 09:43:53 -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:15:30.281 09:43:53 -- accel/accel.sh@72 -- # IFS== 00:15:30.281 09:43:53 -- accel/accel.sh@72 -- # read -r opc module 00:15:30.281 09:43:53 -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:15:30.281 09:43:53 -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:15:30.281 09:43:53 -- accel/accel.sh@72 -- # IFS== 00:15:30.281 09:43:53 -- accel/accel.sh@72 -- # read -r opc module 00:15:30.281 
09:43:53 -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:15:30.281 09:43:53 -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:15:30.281 09:43:53 -- accel/accel.sh@72 -- # IFS== 00:15:30.281 09:43:53 -- accel/accel.sh@72 -- # read -r opc module 00:15:30.281 09:43:53 -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:15:30.281 09:43:53 -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:15:30.281 09:43:53 -- accel/accel.sh@72 -- # IFS== 00:15:30.281 09:43:53 -- accel/accel.sh@72 -- # read -r opc module 00:15:30.281 09:43:53 -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:15:30.281 09:43:53 -- accel/accel.sh@75 -- # killprocess 61858 00:15:30.281 09:43:53 -- common/autotest_common.sh@936 -- # '[' -z 61858 ']' 00:15:30.281 09:43:53 -- common/autotest_common.sh@940 -- # kill -0 61858 00:15:30.281 09:43:53 -- common/autotest_common.sh@941 -- # uname 00:15:30.281 09:43:53 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:15:30.281 09:43:53 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 61858 00:15:30.281 09:43:53 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:15:30.281 09:43:53 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:15:30.281 09:43:53 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 61858' 00:15:30.281 killing process with pid 61858 00:15:30.281 09:43:53 -- common/autotest_common.sh@955 -- # kill 61858 00:15:30.281 09:43:53 -- common/autotest_common.sh@960 -- # wait 61858 00:15:32.815 09:43:55 -- accel/accel.sh@76 -- # trap - ERR 00:15:32.815 09:43:55 -- accel/accel.sh@89 -- # run_test accel_help accel_perf -h 00:15:32.815 09:43:55 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:15:32.815 09:43:55 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:15:32.815 09:43:55 -- common/autotest_common.sh@10 -- # set +x 00:15:32.815 09:43:55 -- common/autotest_common.sh@1111 -- # accel_perf -h 00:15:32.815 09:43:55 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -h 00:15:32.815 09:43:55 -- accel/accel.sh@12 -- # build_accel_config 00:15:32.815 09:43:55 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:15:32.815 09:43:55 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:15:32.815 09:43:55 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:15:32.815 09:43:55 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:15:32.815 09:43:55 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:15:32.815 09:43:55 -- accel/accel.sh@40 -- # local IFS=, 00:15:32.816 09:43:55 -- accel/accel.sh@41 -- # jq -r . 
00:15:32.816 09:43:55 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:15:32.816 09:43:55 -- common/autotest_common.sh@10 -- # set +x 00:15:32.816 09:43:56 -- accel/accel.sh@91 -- # run_test accel_missing_filename NOT accel_perf -t 1 -w compress 00:15:32.816 09:43:56 -- common/autotest_common.sh@1087 -- # '[' 7 -le 1 ']' 00:15:32.816 09:43:56 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:15:32.816 09:43:56 -- common/autotest_common.sh@10 -- # set +x 00:15:32.816 ************************************ 00:15:32.816 START TEST accel_missing_filename 00:15:32.816 ************************************ 00:15:32.816 09:43:56 -- common/autotest_common.sh@1111 -- # NOT accel_perf -t 1 -w compress 00:15:32.816 09:43:56 -- common/autotest_common.sh@638 -- # local es=0 00:15:32.816 09:43:56 -- common/autotest_common.sh@640 -- # valid_exec_arg accel_perf -t 1 -w compress 00:15:32.816 09:43:56 -- common/autotest_common.sh@626 -- # local arg=accel_perf 00:15:32.816 09:43:56 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:15:32.816 09:43:56 -- common/autotest_common.sh@630 -- # type -t accel_perf 00:15:32.816 09:43:56 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:15:32.816 09:43:56 -- common/autotest_common.sh@641 -- # accel_perf -t 1 -w compress 00:15:32.816 09:43:56 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress 00:15:32.816 09:43:56 -- accel/accel.sh@12 -- # build_accel_config 00:15:32.816 09:43:56 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:15:32.816 09:43:56 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:15:32.816 09:43:56 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:15:32.816 09:43:56 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:15:32.816 09:43:56 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:15:32.816 09:43:56 -- accel/accel.sh@40 -- # local IFS=, 00:15:32.816 09:43:56 -- accel/accel.sh@41 -- # jq -r . 00:15:32.816 [2024-04-18 09:43:56.136354] Starting SPDK v24.05-pre git sha1 65b4e17c6 / DPDK 23.11.0 initialization... 00:15:32.816 [2024-04-18 09:43:56.136525] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61938 ] 00:15:33.074 [2024-04-18 09:43:56.313646] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:33.332 [2024-04-18 09:43:56.560618] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:15:33.591 [2024-04-18 09:43:56.777215] app.c: 966:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:15:34.157 [2024-04-18 09:43:57.283979] accel_perf.c:1394:main: *ERROR*: ERROR starting application 00:15:34.415 A filename is required. 
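The failure above is the expected negative case: for compress/decompress workloads accel_perf reads its input from the file named with -l, so omitting it aborts the run. A hedged sketch of the corresponding valid invocation — the binary path and the bib input file are the same ones the next test uses; -y is left out because, as that test then shows, compress does not support the verify option:
# Sketch: a compress run with the required -l input file (no -y; verify is
# unsupported for compress, per the accel_compress_verify output below).
/home/vagrant/spdk_repo/spdk/build/examples/accel_perf -t 1 -w compress \
    -l /home/vagrant/spdk_repo/spdk/test/accel/bib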
00:15:34.415 09:43:57 -- common/autotest_common.sh@641 -- # es=234 00:15:34.415 09:43:57 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:15:34.415 09:43:57 -- common/autotest_common.sh@650 -- # es=106 00:15:34.415 09:43:57 -- common/autotest_common.sh@651 -- # case "$es" in 00:15:34.415 09:43:57 -- common/autotest_common.sh@658 -- # es=1 00:15:34.415 09:43:57 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:15:34.415 ************************************ 00:15:34.415 END TEST accel_missing_filename 00:15:34.415 ************************************ 00:15:34.415 00:15:34.415 real 0m1.601s 00:15:34.415 user 0m1.346s 00:15:34.415 sys 0m0.199s 00:15:34.415 09:43:57 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:15:34.415 09:43:57 -- common/autotest_common.sh@10 -- # set +x 00:15:34.415 09:43:57 -- accel/accel.sh@93 -- # run_test accel_compress_verify NOT accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:15:34.415 09:43:57 -- common/autotest_common.sh@1087 -- # '[' 10 -le 1 ']' 00:15:34.415 09:43:57 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:15:34.415 09:43:57 -- common/autotest_common.sh@10 -- # set +x 00:15:34.415 ************************************ 00:15:34.415 START TEST accel_compress_verify 00:15:34.415 ************************************ 00:15:34.415 09:43:57 -- common/autotest_common.sh@1111 -- # NOT accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:15:34.415 09:43:57 -- common/autotest_common.sh@638 -- # local es=0 00:15:34.415 09:43:57 -- common/autotest_common.sh@640 -- # valid_exec_arg accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:15:34.415 09:43:57 -- common/autotest_common.sh@626 -- # local arg=accel_perf 00:15:34.415 09:43:57 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:15:34.415 09:43:57 -- common/autotest_common.sh@630 -- # type -t accel_perf 00:15:34.415 09:43:57 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:15:34.416 09:43:57 -- common/autotest_common.sh@641 -- # accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:15:34.416 09:43:57 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:15:34.416 09:43:57 -- accel/accel.sh@12 -- # build_accel_config 00:15:34.416 09:43:57 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:15:34.416 09:43:57 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:15:34.416 09:43:57 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:15:34.416 09:43:57 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:15:34.416 09:43:57 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:15:34.416 09:43:57 -- accel/accel.sh@40 -- # local IFS=, 00:15:34.416 09:43:57 -- accel/accel.sh@41 -- # jq -r . 00:15:34.675 [2024-04-18 09:43:57.865335] Starting SPDK v24.05-pre git sha1 65b4e17c6 / DPDK 23.11.0 initialization... 
00:15:34.675 [2024-04-18 09:43:57.865562] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61984 ] 00:15:34.675 [2024-04-18 09:43:58.042624] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:34.934 [2024-04-18 09:43:58.295776] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:15:35.192 [2024-04-18 09:43:58.503315] app.c: 966:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:15:35.760 [2024-04-18 09:43:59.006547] accel_perf.c:1394:main: *ERROR*: ERROR starting application 00:15:36.018 00:15:36.018 Compression does not support the verify option, aborting. 00:15:36.018 09:43:59 -- common/autotest_common.sh@641 -- # es=161 00:15:36.018 09:43:59 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:15:36.018 09:43:59 -- common/autotest_common.sh@650 -- # es=33 00:15:36.018 09:43:59 -- common/autotest_common.sh@651 -- # case "$es" in 00:15:36.018 09:43:59 -- common/autotest_common.sh@658 -- # es=1 00:15:36.018 09:43:59 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:15:36.018 00:15:36.018 real 0m1.590s 00:15:36.018 user 0m1.316s 00:15:36.018 sys 0m0.213s 00:15:36.018 09:43:59 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:15:36.018 ************************************ 00:15:36.018 END TEST accel_compress_verify 00:15:36.018 ************************************ 00:15:36.018 09:43:59 -- common/autotest_common.sh@10 -- # set +x 00:15:36.277 09:43:59 -- accel/accel.sh@95 -- # run_test accel_wrong_workload NOT accel_perf -t 1 -w foobar 00:15:36.277 09:43:59 -- common/autotest_common.sh@1087 -- # '[' 7 -le 1 ']' 00:15:36.277 09:43:59 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:15:36.277 09:43:59 -- common/autotest_common.sh@10 -- # set +x 00:15:36.277 ************************************ 00:15:36.277 START TEST accel_wrong_workload 00:15:36.277 ************************************ 00:15:36.277 09:43:59 -- common/autotest_common.sh@1111 -- # NOT accel_perf -t 1 -w foobar 00:15:36.277 09:43:59 -- common/autotest_common.sh@638 -- # local es=0 00:15:36.277 09:43:59 -- common/autotest_common.sh@640 -- # valid_exec_arg accel_perf -t 1 -w foobar 00:15:36.277 09:43:59 -- common/autotest_common.sh@626 -- # local arg=accel_perf 00:15:36.277 09:43:59 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:15:36.277 09:43:59 -- common/autotest_common.sh@630 -- # type -t accel_perf 00:15:36.277 09:43:59 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:15:36.277 09:43:59 -- common/autotest_common.sh@641 -- # accel_perf -t 1 -w foobar 00:15:36.277 09:43:59 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w foobar 00:15:36.277 09:43:59 -- accel/accel.sh@12 -- # build_accel_config 00:15:36.277 09:43:59 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:15:36.277 09:43:59 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:15:36.277 09:43:59 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:15:36.277 09:43:59 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:15:36.277 09:43:59 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:15:36.277 09:43:59 -- accel/accel.sh@40 -- # local IFS=, 00:15:36.277 09:43:59 -- accel/accel.sh@41 -- # jq -r . 
00:15:36.277 Unsupported workload type: foobar 00:15:36.277 [2024-04-18 09:43:59.554944] app.c:1364:spdk_app_parse_args: *ERROR*: Parsing app-specific command line parameter 'w' failed: 1 00:15:36.277 accel_perf options: 00:15:36.277 [-h help message] 00:15:36.277 [-q queue depth per core] 00:15:36.277 [-C for supported workloads, use this value to configure the io vector size to test (default 1) 00:15:36.277 [-T number of threads per core 00:15:36.277 [-o transfer size in bytes (default: 4KiB. For compress/decompress, 0 means the input file size)] 00:15:36.277 [-t time in seconds] 00:15:36.277 [-w workload type must be one of these: copy, fill, crc32c, copy_crc32c, compare, compress, decompress, dualcast, xor, 00:15:36.277 [ dif_verify, , dif_generate, dif_generate_copy 00:15:36.277 [-M assign module to the operation, not compatible with accel_assign_opc RPC 00:15:36.277 [-l for compress/decompress workloads, name of uncompressed input file 00:15:36.277 [-S for crc32c workload, use this seed value (default 0) 00:15:36.277 [-P for compare workload, percentage of operations that should miscompare (percent, default 0) 00:15:36.277 [-f for fill workload, use this BYTE value (default 255) 00:15:36.277 [-x for xor workload, use this number of source buffers (default, minimum: 2)] 00:15:36.277 [-y verify result if this switch is on] 00:15:36.277 [-a tasks to allocate per core (default: same value as -q)] 00:15:36.277 Can be used to spread operations across a wider range of memory. 00:15:36.277 ************************************ 00:15:36.277 END TEST accel_wrong_workload 00:15:36.277 ************************************ 00:15:36.277 09:43:59 -- common/autotest_common.sh@641 -- # es=1 00:15:36.277 09:43:59 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:15:36.277 09:43:59 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:15:36.277 09:43:59 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:15:36.277 00:15:36.277 real 0m0.068s 00:15:36.277 user 0m0.076s 00:15:36.277 sys 0m0.035s 00:15:36.277 09:43:59 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:15:36.277 09:43:59 -- common/autotest_common.sh@10 -- # set +x 00:15:36.277 09:43:59 -- accel/accel.sh@97 -- # run_test accel_negative_buffers NOT accel_perf -t 1 -w xor -y -x -1 00:15:36.277 09:43:59 -- common/autotest_common.sh@1087 -- # '[' 10 -le 1 ']' 00:15:36.277 09:43:59 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:15:36.277 09:43:59 -- common/autotest_common.sh@10 -- # set +x 00:15:36.536 ************************************ 00:15:36.536 START TEST accel_negative_buffers 00:15:36.536 ************************************ 00:15:36.536 09:43:59 -- common/autotest_common.sh@1111 -- # NOT accel_perf -t 1 -w xor -y -x -1 00:15:36.536 09:43:59 -- common/autotest_common.sh@638 -- # local es=0 00:15:36.536 09:43:59 -- common/autotest_common.sh@640 -- # valid_exec_arg accel_perf -t 1 -w xor -y -x -1 00:15:36.536 09:43:59 -- common/autotest_common.sh@626 -- # local arg=accel_perf 00:15:36.536 09:43:59 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:15:36.536 09:43:59 -- common/autotest_common.sh@630 -- # type -t accel_perf 00:15:36.536 09:43:59 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:15:36.536 09:43:59 -- common/autotest_common.sh@641 -- # accel_perf -t 1 -w xor -y -x -1 00:15:36.536 09:43:59 -- accel/accel.sh@12 -- # build_accel_config 00:15:36.536 09:43:59 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y -x 
-1 00:15:36.536 09:43:59 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:15:36.536 09:43:59 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:15:36.536 09:43:59 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:15:36.536 09:43:59 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:15:36.536 09:43:59 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:15:36.536 09:43:59 -- accel/accel.sh@40 -- # local IFS=, 00:15:36.536 09:43:59 -- accel/accel.sh@41 -- # jq -r . 00:15:36.536 -x option must be non-negative. 00:15:36.536 [2024-04-18 09:43:59.732081] app.c:1364:spdk_app_parse_args: *ERROR*: Parsing app-specific command line parameter 'x' failed: 1 00:15:36.536 accel_perf options: 00:15:36.536 [-h help message] 00:15:36.536 [-q queue depth per core] 00:15:36.536 [-C for supported workloads, use this value to configure the io vector size to test (default 1) 00:15:36.536 [-T number of threads per core 00:15:36.536 [-o transfer size in bytes (default: 4KiB. For compress/decompress, 0 means the input file size)] 00:15:36.536 [-t time in seconds] 00:15:36.536 [-w workload type must be one of these: copy, fill, crc32c, copy_crc32c, compare, compress, decompress, dualcast, xor, 00:15:36.536 [ dif_verify, , dif_generate, dif_generate_copy 00:15:36.536 [-M assign module to the operation, not compatible with accel_assign_opc RPC 00:15:36.536 [-l for compress/decompress workloads, name of uncompressed input file 00:15:36.536 [-S for crc32c workload, use this seed value (default 0) 00:15:36.536 [-P for compare workload, percentage of operations that should miscompare (percent, default 0) 00:15:36.536 [-f for fill workload, use this BYTE value (default 255) 00:15:36.536 [-x for xor workload, use this number of source buffers (default, minimum: 2)] 00:15:36.536 [-y verify result if this switch is on] 00:15:36.536 [-a tasks to allocate per core (default: same value as -q)] 00:15:36.537 Can be used to spread operations across a wider range of memory. 
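Per the help text above, -x sets the number of xor source buffers and defaults to the minimum of 2, so the test's -x -1 is rejected during argument parsing before any work is submitted. A sketch of a valid xor invocation using only flags from that help listing:
# Sketch: xor workload with a valid source-buffer count (minimum is 2).
/home/vagrant/spdk_repo/spdk/build/examples/accel_perf -t 1 -w xor -y -x 2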
00:15:36.537 ************************************ 00:15:36.537 END TEST accel_negative_buffers 00:15:36.537 ************************************ 00:15:36.537 09:43:59 -- common/autotest_common.sh@641 -- # es=1 00:15:36.537 09:43:59 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:15:36.537 09:43:59 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:15:36.537 09:43:59 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:15:36.537 00:15:36.537 real 0m0.080s 00:15:36.537 user 0m0.092s 00:15:36.537 sys 0m0.043s 00:15:36.537 09:43:59 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:15:36.537 09:43:59 -- common/autotest_common.sh@10 -- # set +x 00:15:36.537 09:43:59 -- accel/accel.sh@101 -- # run_test accel_crc32c accel_test -t 1 -w crc32c -S 32 -y 00:15:36.537 09:43:59 -- common/autotest_common.sh@1087 -- # '[' 9 -le 1 ']' 00:15:36.537 09:43:59 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:15:36.537 09:43:59 -- common/autotest_common.sh@10 -- # set +x 00:15:36.537 ************************************ 00:15:36.537 START TEST accel_crc32c 00:15:36.537 ************************************ 00:15:36.537 09:43:59 -- common/autotest_common.sh@1111 -- # accel_test -t 1 -w crc32c -S 32 -y 00:15:36.537 09:43:59 -- accel/accel.sh@16 -- # local accel_opc 00:15:36.537 09:43:59 -- accel/accel.sh@17 -- # local accel_module 00:15:36.537 09:43:59 -- accel/accel.sh@19 -- # IFS=: 00:15:36.537 09:43:59 -- accel/accel.sh@15 -- # accel_perf -t 1 -w crc32c -S 32 -y 00:15:36.537 09:43:59 -- accel/accel.sh@19 -- # read -r var val 00:15:36.537 09:43:59 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -S 32 -y 00:15:36.537 09:43:59 -- accel/accel.sh@12 -- # build_accel_config 00:15:36.537 09:43:59 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:15:36.537 09:43:59 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:15:36.537 09:43:59 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:15:36.537 09:43:59 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:15:36.537 09:43:59 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:15:36.537 09:43:59 -- accel/accel.sh@40 -- # local IFS=, 00:15:36.537 09:43:59 -- accel/accel.sh@41 -- # jq -r . 00:15:36.796 [2024-04-18 09:43:59.935046] Starting SPDK v24.05-pre git sha1 65b4e17c6 / DPDK 23.11.0 initialization... 
00:15:36.796 [2024-04-18 09:43:59.935220] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62077 ] 00:15:36.796 [2024-04-18 09:44:00.100923] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:37.054 [2024-04-18 09:44:00.384869] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:15:37.313 09:44:00 -- accel/accel.sh@20 -- # val= 00:15:37.313 09:44:00 -- accel/accel.sh@21 -- # case "$var" in 00:15:37.313 09:44:00 -- accel/accel.sh@19 -- # IFS=: 00:15:37.313 09:44:00 -- accel/accel.sh@19 -- # read -r var val 00:15:37.313 09:44:00 -- accel/accel.sh@20 -- # val= 00:15:37.313 09:44:00 -- accel/accel.sh@21 -- # case "$var" in 00:15:37.313 09:44:00 -- accel/accel.sh@19 -- # IFS=: 00:15:37.313 09:44:00 -- accel/accel.sh@19 -- # read -r var val 00:15:37.313 09:44:00 -- accel/accel.sh@20 -- # val=0x1 00:15:37.313 09:44:00 -- accel/accel.sh@21 -- # case "$var" in 00:15:37.313 09:44:00 -- accel/accel.sh@19 -- # IFS=: 00:15:37.313 09:44:00 -- accel/accel.sh@19 -- # read -r var val 00:15:37.313 09:44:00 -- accel/accel.sh@20 -- # val= 00:15:37.313 09:44:00 -- accel/accel.sh@21 -- # case "$var" in 00:15:37.313 09:44:00 -- accel/accel.sh@19 -- # IFS=: 00:15:37.313 09:44:00 -- accel/accel.sh@19 -- # read -r var val 00:15:37.313 09:44:00 -- accel/accel.sh@20 -- # val= 00:15:37.313 09:44:00 -- accel/accel.sh@21 -- # case "$var" in 00:15:37.313 09:44:00 -- accel/accel.sh@19 -- # IFS=: 00:15:37.313 09:44:00 -- accel/accel.sh@19 -- # read -r var val 00:15:37.313 09:44:00 -- accel/accel.sh@20 -- # val=crc32c 00:15:37.313 09:44:00 -- accel/accel.sh@21 -- # case "$var" in 00:15:37.313 09:44:00 -- accel/accel.sh@23 -- # accel_opc=crc32c 00:15:37.313 09:44:00 -- accel/accel.sh@19 -- # IFS=: 00:15:37.313 09:44:00 -- accel/accel.sh@19 -- # read -r var val 00:15:37.313 09:44:00 -- accel/accel.sh@20 -- # val=32 00:15:37.313 09:44:00 -- accel/accel.sh@21 -- # case "$var" in 00:15:37.313 09:44:00 -- accel/accel.sh@19 -- # IFS=: 00:15:37.313 09:44:00 -- accel/accel.sh@19 -- # read -r var val 00:15:37.313 09:44:00 -- accel/accel.sh@20 -- # val='4096 bytes' 00:15:37.313 09:44:00 -- accel/accel.sh@21 -- # case "$var" in 00:15:37.313 09:44:00 -- accel/accel.sh@19 -- # IFS=: 00:15:37.313 09:44:00 -- accel/accel.sh@19 -- # read -r var val 00:15:37.313 09:44:00 -- accel/accel.sh@20 -- # val= 00:15:37.313 09:44:00 -- accel/accel.sh@21 -- # case "$var" in 00:15:37.313 09:44:00 -- accel/accel.sh@19 -- # IFS=: 00:15:37.313 09:44:00 -- accel/accel.sh@19 -- # read -r var val 00:15:37.313 09:44:00 -- accel/accel.sh@20 -- # val=software 00:15:37.313 09:44:00 -- accel/accel.sh@21 -- # case "$var" in 00:15:37.313 09:44:00 -- accel/accel.sh@22 -- # accel_module=software 00:15:37.313 09:44:00 -- accel/accel.sh@19 -- # IFS=: 00:15:37.313 09:44:00 -- accel/accel.sh@19 -- # read -r var val 00:15:37.313 09:44:00 -- accel/accel.sh@20 -- # val=32 00:15:37.313 09:44:00 -- accel/accel.sh@21 -- # case "$var" in 00:15:37.313 09:44:00 -- accel/accel.sh@19 -- # IFS=: 00:15:37.313 09:44:00 -- accel/accel.sh@19 -- # read -r var val 00:15:37.313 09:44:00 -- accel/accel.sh@20 -- # val=32 00:15:37.313 09:44:00 -- accel/accel.sh@21 -- # case "$var" in 00:15:37.313 09:44:00 -- accel/accel.sh@19 -- # IFS=: 00:15:37.313 09:44:00 -- accel/accel.sh@19 -- # read -r var val 00:15:37.313 09:44:00 -- accel/accel.sh@20 -- # val=1 00:15:37.313 09:44:00 
-- accel/accel.sh@21 -- # case "$var" in 00:15:37.313 09:44:00 -- accel/accel.sh@19 -- # IFS=: 00:15:37.313 09:44:00 -- accel/accel.sh@19 -- # read -r var val 00:15:37.313 09:44:00 -- accel/accel.sh@20 -- # val='1 seconds' 00:15:37.313 09:44:00 -- accel/accel.sh@21 -- # case "$var" in 00:15:37.313 09:44:00 -- accel/accel.sh@19 -- # IFS=: 00:15:37.313 09:44:00 -- accel/accel.sh@19 -- # read -r var val 00:15:37.313 09:44:00 -- accel/accel.sh@20 -- # val=Yes 00:15:37.313 09:44:00 -- accel/accel.sh@21 -- # case "$var" in 00:15:37.313 09:44:00 -- accel/accel.sh@19 -- # IFS=: 00:15:37.313 09:44:00 -- accel/accel.sh@19 -- # read -r var val 00:15:37.313 09:44:00 -- accel/accel.sh@20 -- # val= 00:15:37.313 09:44:00 -- accel/accel.sh@21 -- # case "$var" in 00:15:37.313 09:44:00 -- accel/accel.sh@19 -- # IFS=: 00:15:37.313 09:44:00 -- accel/accel.sh@19 -- # read -r var val 00:15:37.313 09:44:00 -- accel/accel.sh@20 -- # val= 00:15:37.313 09:44:00 -- accel/accel.sh@21 -- # case "$var" in 00:15:37.313 09:44:00 -- accel/accel.sh@19 -- # IFS=: 00:15:37.313 09:44:00 -- accel/accel.sh@19 -- # read -r var val 00:15:39.215 09:44:02 -- accel/accel.sh@20 -- # val= 00:15:39.215 09:44:02 -- accel/accel.sh@21 -- # case "$var" in 00:15:39.215 09:44:02 -- accel/accel.sh@19 -- # IFS=: 00:15:39.215 09:44:02 -- accel/accel.sh@19 -- # read -r var val 00:15:39.215 09:44:02 -- accel/accel.sh@20 -- # val= 00:15:39.215 09:44:02 -- accel/accel.sh@21 -- # case "$var" in 00:15:39.215 09:44:02 -- accel/accel.sh@19 -- # IFS=: 00:15:39.215 09:44:02 -- accel/accel.sh@19 -- # read -r var val 00:15:39.215 09:44:02 -- accel/accel.sh@20 -- # val= 00:15:39.215 09:44:02 -- accel/accel.sh@21 -- # case "$var" in 00:15:39.215 09:44:02 -- accel/accel.sh@19 -- # IFS=: 00:15:39.215 09:44:02 -- accel/accel.sh@19 -- # read -r var val 00:15:39.215 09:44:02 -- accel/accel.sh@20 -- # val= 00:15:39.215 09:44:02 -- accel/accel.sh@21 -- # case "$var" in 00:15:39.215 09:44:02 -- accel/accel.sh@19 -- # IFS=: 00:15:39.215 09:44:02 -- accel/accel.sh@19 -- # read -r var val 00:15:39.215 09:44:02 -- accel/accel.sh@20 -- # val= 00:15:39.215 09:44:02 -- accel/accel.sh@21 -- # case "$var" in 00:15:39.215 09:44:02 -- accel/accel.sh@19 -- # IFS=: 00:15:39.215 09:44:02 -- accel/accel.sh@19 -- # read -r var val 00:15:39.215 09:44:02 -- accel/accel.sh@20 -- # val= 00:15:39.215 09:44:02 -- accel/accel.sh@21 -- # case "$var" in 00:15:39.215 09:44:02 -- accel/accel.sh@19 -- # IFS=: 00:15:39.215 09:44:02 -- accel/accel.sh@19 -- # read -r var val 00:15:39.215 09:44:02 -- accel/accel.sh@27 -- # [[ -n software ]] 00:15:39.215 09:44:02 -- accel/accel.sh@27 -- # [[ -n crc32c ]] 00:15:39.215 09:44:02 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:15:39.215 00:15:39.215 real 0m2.586s 00:15:39.215 user 0m2.297s 00:15:39.215 sys 0m0.190s 00:15:39.215 09:44:02 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:15:39.215 ************************************ 00:15:39.215 END TEST accel_crc32c 00:15:39.215 ************************************ 00:15:39.215 09:44:02 -- common/autotest_common.sh@10 -- # set +x 00:15:39.215 09:44:02 -- accel/accel.sh@102 -- # run_test accel_crc32c_C2 accel_test -t 1 -w crc32c -y -C 2 00:15:39.215 09:44:02 -- common/autotest_common.sh@1087 -- # '[' 9 -le 1 ']' 00:15:39.215 09:44:02 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:15:39.215 09:44:02 -- common/autotest_common.sh@10 -- # set +x 00:15:39.215 ************************************ 00:15:39.215 START TEST accel_crc32c_C2 00:15:39.215 
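The software-path crc32c case above completes in roughly 2.6 s of wall time and passes its module/opcode checks, after which accel.sh queues the chained variant through its run_test helper. A minimal sketch of that hand-off, copied from the xtrace line above (-t is the run time in seconds and -w the workload; -y and -C 2 are reproduced as logged and assumed to request result verification and a two-buffer chained CRC):

    run_test accel_crc32c_C2 accel_test -t 1 -w crc32c -y -C 2

run_test and accel_test are shell functions from the SPDK autotest scripts driving this log, not standalone binaries.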
************************************ 00:15:39.215 09:44:02 -- common/autotest_common.sh@1111 -- # accel_test -t 1 -w crc32c -y -C 2 00:15:39.215 09:44:02 -- accel/accel.sh@16 -- # local accel_opc 00:15:39.215 09:44:02 -- accel/accel.sh@17 -- # local accel_module 00:15:39.215 09:44:02 -- accel/accel.sh@19 -- # IFS=: 00:15:39.215 09:44:02 -- accel/accel.sh@15 -- # accel_perf -t 1 -w crc32c -y -C 2 00:15:39.215 09:44:02 -- accel/accel.sh@19 -- # read -r var val 00:15:39.215 09:44:02 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -y -C 2 00:15:39.215 09:44:02 -- accel/accel.sh@12 -- # build_accel_config 00:15:39.215 09:44:02 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:15:39.215 09:44:02 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:15:39.215 09:44:02 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:15:39.215 09:44:02 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:15:39.215 09:44:02 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:15:39.215 09:44:02 -- accel/accel.sh@40 -- # local IFS=, 00:15:39.215 09:44:02 -- accel/accel.sh@41 -- # jq -r . 00:15:39.474 [2024-04-18 09:44:02.639434] Starting SPDK v24.05-pre git sha1 65b4e17c6 / DPDK 23.11.0 initialization... 00:15:39.474 [2024-04-18 09:44:02.639587] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62128 ] 00:15:39.474 [2024-04-18 09:44:02.805687] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:39.773 [2024-04-18 09:44:03.073558] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:15:40.031 09:44:03 -- accel/accel.sh@20 -- # val= 00:15:40.031 09:44:03 -- accel/accel.sh@21 -- # case "$var" in 00:15:40.031 09:44:03 -- accel/accel.sh@19 -- # IFS=: 00:15:40.031 09:44:03 -- accel/accel.sh@19 -- # read -r var val 00:15:40.031 09:44:03 -- accel/accel.sh@20 -- # val= 00:15:40.031 09:44:03 -- accel/accel.sh@21 -- # case "$var" in 00:15:40.031 09:44:03 -- accel/accel.sh@19 -- # IFS=: 00:15:40.031 09:44:03 -- accel/accel.sh@19 -- # read -r var val 00:15:40.031 09:44:03 -- accel/accel.sh@20 -- # val=0x1 00:15:40.031 09:44:03 -- accel/accel.sh@21 -- # case "$var" in 00:15:40.031 09:44:03 -- accel/accel.sh@19 -- # IFS=: 00:15:40.031 09:44:03 -- accel/accel.sh@19 -- # read -r var val 00:15:40.031 09:44:03 -- accel/accel.sh@20 -- # val= 00:15:40.032 09:44:03 -- accel/accel.sh@21 -- # case "$var" in 00:15:40.032 09:44:03 -- accel/accel.sh@19 -- # IFS=: 00:15:40.032 09:44:03 -- accel/accel.sh@19 -- # read -r var val 00:15:40.032 09:44:03 -- accel/accel.sh@20 -- # val= 00:15:40.032 09:44:03 -- accel/accel.sh@21 -- # case "$var" in 00:15:40.032 09:44:03 -- accel/accel.sh@19 -- # IFS=: 00:15:40.032 09:44:03 -- accel/accel.sh@19 -- # read -r var val 00:15:40.032 09:44:03 -- accel/accel.sh@20 -- # val=crc32c 00:15:40.032 09:44:03 -- accel/accel.sh@21 -- # case "$var" in 00:15:40.032 09:44:03 -- accel/accel.sh@23 -- # accel_opc=crc32c 00:15:40.032 09:44:03 -- accel/accel.sh@19 -- # IFS=: 00:15:40.032 09:44:03 -- accel/accel.sh@19 -- # read -r var val 00:15:40.032 09:44:03 -- accel/accel.sh@20 -- # val=0 00:15:40.032 09:44:03 -- accel/accel.sh@21 -- # case "$var" in 00:15:40.032 09:44:03 -- accel/accel.sh@19 -- # IFS=: 00:15:40.032 09:44:03 -- accel/accel.sh@19 -- # read -r var val 00:15:40.032 09:44:03 -- accel/accel.sh@20 -- # val='4096 bytes' 00:15:40.032 09:44:03 -- accel/accel.sh@21 -- # case "$var" 
in 00:15:40.032 09:44:03 -- accel/accel.sh@19 -- # IFS=: 00:15:40.032 09:44:03 -- accel/accel.sh@19 -- # read -r var val 00:15:40.032 09:44:03 -- accel/accel.sh@20 -- # val= 00:15:40.032 09:44:03 -- accel/accel.sh@21 -- # case "$var" in 00:15:40.032 09:44:03 -- accel/accel.sh@19 -- # IFS=: 00:15:40.032 09:44:03 -- accel/accel.sh@19 -- # read -r var val 00:15:40.032 09:44:03 -- accel/accel.sh@20 -- # val=software 00:15:40.032 09:44:03 -- accel/accel.sh@21 -- # case "$var" in 00:15:40.032 09:44:03 -- accel/accel.sh@22 -- # accel_module=software 00:15:40.032 09:44:03 -- accel/accel.sh@19 -- # IFS=: 00:15:40.032 09:44:03 -- accel/accel.sh@19 -- # read -r var val 00:15:40.032 09:44:03 -- accel/accel.sh@20 -- # val=32 00:15:40.032 09:44:03 -- accel/accel.sh@21 -- # case "$var" in 00:15:40.032 09:44:03 -- accel/accel.sh@19 -- # IFS=: 00:15:40.032 09:44:03 -- accel/accel.sh@19 -- # read -r var val 00:15:40.032 09:44:03 -- accel/accel.sh@20 -- # val=32 00:15:40.032 09:44:03 -- accel/accel.sh@21 -- # case "$var" in 00:15:40.032 09:44:03 -- accel/accel.sh@19 -- # IFS=: 00:15:40.032 09:44:03 -- accel/accel.sh@19 -- # read -r var val 00:15:40.032 09:44:03 -- accel/accel.sh@20 -- # val=1 00:15:40.032 09:44:03 -- accel/accel.sh@21 -- # case "$var" in 00:15:40.032 09:44:03 -- accel/accel.sh@19 -- # IFS=: 00:15:40.032 09:44:03 -- accel/accel.sh@19 -- # read -r var val 00:15:40.032 09:44:03 -- accel/accel.sh@20 -- # val='1 seconds' 00:15:40.032 09:44:03 -- accel/accel.sh@21 -- # case "$var" in 00:15:40.032 09:44:03 -- accel/accel.sh@19 -- # IFS=: 00:15:40.032 09:44:03 -- accel/accel.sh@19 -- # read -r var val 00:15:40.032 09:44:03 -- accel/accel.sh@20 -- # val=Yes 00:15:40.032 09:44:03 -- accel/accel.sh@21 -- # case "$var" in 00:15:40.032 09:44:03 -- accel/accel.sh@19 -- # IFS=: 00:15:40.032 09:44:03 -- accel/accel.sh@19 -- # read -r var val 00:15:40.032 09:44:03 -- accel/accel.sh@20 -- # val= 00:15:40.032 09:44:03 -- accel/accel.sh@21 -- # case "$var" in 00:15:40.032 09:44:03 -- accel/accel.sh@19 -- # IFS=: 00:15:40.032 09:44:03 -- accel/accel.sh@19 -- # read -r var val 00:15:40.032 09:44:03 -- accel/accel.sh@20 -- # val= 00:15:40.032 09:44:03 -- accel/accel.sh@21 -- # case "$var" in 00:15:40.032 09:44:03 -- accel/accel.sh@19 -- # IFS=: 00:15:40.032 09:44:03 -- accel/accel.sh@19 -- # read -r var val 00:15:41.934 09:44:05 -- accel/accel.sh@20 -- # val= 00:15:41.934 09:44:05 -- accel/accel.sh@21 -- # case "$var" in 00:15:41.934 09:44:05 -- accel/accel.sh@19 -- # IFS=: 00:15:41.934 09:44:05 -- accel/accel.sh@19 -- # read -r var val 00:15:41.934 09:44:05 -- accel/accel.sh@20 -- # val= 00:15:41.934 09:44:05 -- accel/accel.sh@21 -- # case "$var" in 00:15:41.934 09:44:05 -- accel/accel.sh@19 -- # IFS=: 00:15:41.934 09:44:05 -- accel/accel.sh@19 -- # read -r var val 00:15:41.934 09:44:05 -- accel/accel.sh@20 -- # val= 00:15:41.934 09:44:05 -- accel/accel.sh@21 -- # case "$var" in 00:15:41.934 09:44:05 -- accel/accel.sh@19 -- # IFS=: 00:15:41.934 09:44:05 -- accel/accel.sh@19 -- # read -r var val 00:15:41.934 09:44:05 -- accel/accel.sh@20 -- # val= 00:15:41.934 09:44:05 -- accel/accel.sh@21 -- # case "$var" in 00:15:41.934 09:44:05 -- accel/accel.sh@19 -- # IFS=: 00:15:41.934 09:44:05 -- accel/accel.sh@19 -- # read -r var val 00:15:41.934 09:44:05 -- accel/accel.sh@20 -- # val= 00:15:41.934 09:44:05 -- accel/accel.sh@21 -- # case "$var" in 00:15:41.934 09:44:05 -- accel/accel.sh@19 -- # IFS=: 00:15:41.934 09:44:05 -- accel/accel.sh@19 -- # read -r var val 00:15:41.934 09:44:05 -- accel/accel.sh@20 -- # val= 
00:15:41.934 09:44:05 -- accel/accel.sh@21 -- # case "$var" in 00:15:41.934 09:44:05 -- accel/accel.sh@19 -- # IFS=: 00:15:41.934 09:44:05 -- accel/accel.sh@19 -- # read -r var val 00:15:41.934 ************************************ 00:15:41.934 09:44:05 -- accel/accel.sh@27 -- # [[ -n software ]] 00:15:41.934 09:44:05 -- accel/accel.sh@27 -- # [[ -n crc32c ]] 00:15:41.934 09:44:05 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:15:41.934 00:15:41.934 real 0m2.616s 00:15:41.934 user 0m2.328s 00:15:41.934 sys 0m0.190s 00:15:41.934 09:44:05 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:15:41.934 09:44:05 -- common/autotest_common.sh@10 -- # set +x 00:15:41.934 END TEST accel_crc32c_C2 00:15:41.934 ************************************ 00:15:41.934 09:44:05 -- accel/accel.sh@103 -- # run_test accel_copy accel_test -t 1 -w copy -y 00:15:41.934 09:44:05 -- common/autotest_common.sh@1087 -- # '[' 7 -le 1 ']' 00:15:41.935 09:44:05 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:15:41.935 09:44:05 -- common/autotest_common.sh@10 -- # set +x 00:15:41.935 ************************************ 00:15:41.935 START TEST accel_copy 00:15:41.935 ************************************ 00:15:41.935 09:44:05 -- common/autotest_common.sh@1111 -- # accel_test -t 1 -w copy -y 00:15:41.935 09:44:05 -- accel/accel.sh@16 -- # local accel_opc 00:15:41.935 09:44:05 -- accel/accel.sh@17 -- # local accel_module 00:15:41.935 09:44:05 -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy -y 00:15:41.935 09:44:05 -- accel/accel.sh@19 -- # IFS=: 00:15:41.935 09:44:05 -- accel/accel.sh@19 -- # read -r var val 00:15:41.935 09:44:05 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy -y 00:15:41.935 09:44:05 -- accel/accel.sh@12 -- # build_accel_config 00:15:41.935 09:44:05 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:15:41.935 09:44:05 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:15:41.935 09:44:05 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:15:41.935 09:44:05 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:15:41.935 09:44:05 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:15:41.935 09:44:05 -- accel/accel.sh@40 -- # local IFS=, 00:15:41.935 09:44:05 -- accel/accel.sh@41 -- # jq -r . 00:15:42.193 [2024-04-18 09:44:05.378042] Starting SPDK v24.05-pre git sha1 65b4e17c6 / DPDK 23.11.0 initialization... 
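The chained crc32c case passes as well, and the copy workload that follows is driven by the same accel_perf example binary. Stripped of the xtrace decoration, the invocation logged above is:

    /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy -y

Here /dev/fd/62 is the JSON accel configuration that build_accel_config hands over on a file descriptor; rerunning it by hand would presumably mean pointing -c at a saved copy of that JSON instead, which is an adaptation rather than something this log shows.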
00:15:42.193 [2024-04-18 09:44:05.378263] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62178 ] 00:15:42.193 [2024-04-18 09:44:05.556045] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:42.451 [2024-04-18 09:44:05.833259] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:15:42.710 09:44:06 -- accel/accel.sh@20 -- # val= 00:15:42.710 09:44:06 -- accel/accel.sh@21 -- # case "$var" in 00:15:42.710 09:44:06 -- accel/accel.sh@19 -- # IFS=: 00:15:42.710 09:44:06 -- accel/accel.sh@19 -- # read -r var val 00:15:42.710 09:44:06 -- accel/accel.sh@20 -- # val= 00:15:42.710 09:44:06 -- accel/accel.sh@21 -- # case "$var" in 00:15:42.710 09:44:06 -- accel/accel.sh@19 -- # IFS=: 00:15:42.710 09:44:06 -- accel/accel.sh@19 -- # read -r var val 00:15:42.710 09:44:06 -- accel/accel.sh@20 -- # val=0x1 00:15:42.710 09:44:06 -- accel/accel.sh@21 -- # case "$var" in 00:15:42.710 09:44:06 -- accel/accel.sh@19 -- # IFS=: 00:15:42.710 09:44:06 -- accel/accel.sh@19 -- # read -r var val 00:15:42.710 09:44:06 -- accel/accel.sh@20 -- # val= 00:15:42.710 09:44:06 -- accel/accel.sh@21 -- # case "$var" in 00:15:42.710 09:44:06 -- accel/accel.sh@19 -- # IFS=: 00:15:42.710 09:44:06 -- accel/accel.sh@19 -- # read -r var val 00:15:42.710 09:44:06 -- accel/accel.sh@20 -- # val= 00:15:42.710 09:44:06 -- accel/accel.sh@21 -- # case "$var" in 00:15:42.710 09:44:06 -- accel/accel.sh@19 -- # IFS=: 00:15:42.710 09:44:06 -- accel/accel.sh@19 -- # read -r var val 00:15:42.710 09:44:06 -- accel/accel.sh@20 -- # val=copy 00:15:42.710 09:44:06 -- accel/accel.sh@21 -- # case "$var" in 00:15:42.710 09:44:06 -- accel/accel.sh@23 -- # accel_opc=copy 00:15:42.710 09:44:06 -- accel/accel.sh@19 -- # IFS=: 00:15:42.710 09:44:06 -- accel/accel.sh@19 -- # read -r var val 00:15:42.710 09:44:06 -- accel/accel.sh@20 -- # val='4096 bytes' 00:15:42.710 09:44:06 -- accel/accel.sh@21 -- # case "$var" in 00:15:42.710 09:44:06 -- accel/accel.sh@19 -- # IFS=: 00:15:42.710 09:44:06 -- accel/accel.sh@19 -- # read -r var val 00:15:42.710 09:44:06 -- accel/accel.sh@20 -- # val= 00:15:42.710 09:44:06 -- accel/accel.sh@21 -- # case "$var" in 00:15:42.710 09:44:06 -- accel/accel.sh@19 -- # IFS=: 00:15:42.710 09:44:06 -- accel/accel.sh@19 -- # read -r var val 00:15:42.710 09:44:06 -- accel/accel.sh@20 -- # val=software 00:15:42.710 09:44:06 -- accel/accel.sh@21 -- # case "$var" in 00:15:42.710 09:44:06 -- accel/accel.sh@22 -- # accel_module=software 00:15:42.710 09:44:06 -- accel/accel.sh@19 -- # IFS=: 00:15:42.710 09:44:06 -- accel/accel.sh@19 -- # read -r var val 00:15:42.710 09:44:06 -- accel/accel.sh@20 -- # val=32 00:15:42.710 09:44:06 -- accel/accel.sh@21 -- # case "$var" in 00:15:42.710 09:44:06 -- accel/accel.sh@19 -- # IFS=: 00:15:42.710 09:44:06 -- accel/accel.sh@19 -- # read -r var val 00:15:42.710 09:44:06 -- accel/accel.sh@20 -- # val=32 00:15:42.711 09:44:06 -- accel/accel.sh@21 -- # case "$var" in 00:15:42.711 09:44:06 -- accel/accel.sh@19 -- # IFS=: 00:15:42.711 09:44:06 -- accel/accel.sh@19 -- # read -r var val 00:15:42.711 09:44:06 -- accel/accel.sh@20 -- # val=1 00:15:42.711 09:44:06 -- accel/accel.sh@21 -- # case "$var" in 00:15:42.711 09:44:06 -- accel/accel.sh@19 -- # IFS=: 00:15:42.711 09:44:06 -- accel/accel.sh@19 -- # read -r var val 00:15:42.711 09:44:06 -- accel/accel.sh@20 -- # val='1 seconds' 00:15:42.711 
09:44:06 -- accel/accel.sh@21 -- # case "$var" in 00:15:42.711 09:44:06 -- accel/accel.sh@19 -- # IFS=: 00:15:42.711 09:44:06 -- accel/accel.sh@19 -- # read -r var val 00:15:42.711 09:44:06 -- accel/accel.sh@20 -- # val=Yes 00:15:42.711 09:44:06 -- accel/accel.sh@21 -- # case "$var" in 00:15:42.711 09:44:06 -- accel/accel.sh@19 -- # IFS=: 00:15:42.711 09:44:06 -- accel/accel.sh@19 -- # read -r var val 00:15:42.711 09:44:06 -- accel/accel.sh@20 -- # val= 00:15:42.711 09:44:06 -- accel/accel.sh@21 -- # case "$var" in 00:15:42.711 09:44:06 -- accel/accel.sh@19 -- # IFS=: 00:15:42.711 09:44:06 -- accel/accel.sh@19 -- # read -r var val 00:15:42.711 09:44:06 -- accel/accel.sh@20 -- # val= 00:15:42.711 09:44:06 -- accel/accel.sh@21 -- # case "$var" in 00:15:42.711 09:44:06 -- accel/accel.sh@19 -- # IFS=: 00:15:42.711 09:44:06 -- accel/accel.sh@19 -- # read -r var val 00:15:44.614 09:44:07 -- accel/accel.sh@20 -- # val= 00:15:44.614 09:44:07 -- accel/accel.sh@21 -- # case "$var" in 00:15:44.614 09:44:07 -- accel/accel.sh@19 -- # IFS=: 00:15:44.614 09:44:07 -- accel/accel.sh@19 -- # read -r var val 00:15:44.614 09:44:07 -- accel/accel.sh@20 -- # val= 00:15:44.614 09:44:07 -- accel/accel.sh@21 -- # case "$var" in 00:15:44.614 09:44:07 -- accel/accel.sh@19 -- # IFS=: 00:15:44.614 09:44:07 -- accel/accel.sh@19 -- # read -r var val 00:15:44.614 09:44:07 -- accel/accel.sh@20 -- # val= 00:15:44.614 09:44:07 -- accel/accel.sh@21 -- # case "$var" in 00:15:44.614 09:44:07 -- accel/accel.sh@19 -- # IFS=: 00:15:44.614 09:44:07 -- accel/accel.sh@19 -- # read -r var val 00:15:44.614 09:44:07 -- accel/accel.sh@20 -- # val= 00:15:44.614 09:44:07 -- accel/accel.sh@21 -- # case "$var" in 00:15:44.614 09:44:07 -- accel/accel.sh@19 -- # IFS=: 00:15:44.614 09:44:07 -- accel/accel.sh@19 -- # read -r var val 00:15:44.614 09:44:07 -- accel/accel.sh@20 -- # val= 00:15:44.614 09:44:07 -- accel/accel.sh@21 -- # case "$var" in 00:15:44.614 09:44:07 -- accel/accel.sh@19 -- # IFS=: 00:15:44.614 09:44:07 -- accel/accel.sh@19 -- # read -r var val 00:15:44.614 09:44:07 -- accel/accel.sh@20 -- # val= 00:15:44.614 09:44:07 -- accel/accel.sh@21 -- # case "$var" in 00:15:44.614 09:44:07 -- accel/accel.sh@19 -- # IFS=: 00:15:44.614 09:44:07 -- accel/accel.sh@19 -- # read -r var val 00:15:44.614 ************************************ 00:15:44.614 END TEST accel_copy 00:15:44.614 ************************************ 00:15:44.614 09:44:07 -- accel/accel.sh@27 -- # [[ -n software ]] 00:15:44.614 09:44:07 -- accel/accel.sh@27 -- # [[ -n copy ]] 00:15:44.614 09:44:07 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:15:44.614 00:15:44.614 real 0m2.592s 00:15:44.614 user 0m2.293s 00:15:44.614 sys 0m0.202s 00:15:44.614 09:44:07 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:15:44.614 09:44:07 -- common/autotest_common.sh@10 -- # set +x 00:15:44.614 09:44:07 -- accel/accel.sh@104 -- # run_test accel_fill accel_test -t 1 -w fill -f 128 -q 64 -a 64 -y 00:15:44.614 09:44:07 -- common/autotest_common.sh@1087 -- # '[' 13 -le 1 ']' 00:15:44.614 09:44:07 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:15:44.614 09:44:07 -- common/autotest_common.sh@10 -- # set +x 00:15:44.873 ************************************ 00:15:44.873 START TEST accel_fill 00:15:44.873 ************************************ 00:15:44.873 09:44:08 -- common/autotest_common.sh@1111 -- # accel_test -t 1 -w fill -f 128 -q 64 -a 64 -y 00:15:44.873 09:44:08 -- accel/accel.sh@16 -- # local accel_opc 00:15:44.873 09:44:08 -- accel/accel.sh@17 -- # local 
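The copy case also lands at about 2.6 s, and the fill workload is queued next with a few extra knobs. A sketch taken from the run_test line above; -t and -w keep their earlier meanings, while -f 128, -q 64 and -a 64 are reproduced verbatim from the log rather than interpreted here:

    run_test accel_fill accel_test -t 1 -w fill -f 128 -q 64 -a 64 -y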
accel_module 00:15:44.873 09:44:08 -- accel/accel.sh@19 -- # IFS=: 00:15:44.873 09:44:08 -- accel/accel.sh@19 -- # read -r var val 00:15:44.873 09:44:08 -- accel/accel.sh@15 -- # accel_perf -t 1 -w fill -f 128 -q 64 -a 64 -y 00:15:44.873 09:44:08 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w fill -f 128 -q 64 -a 64 -y 00:15:44.873 09:44:08 -- accel/accel.sh@12 -- # build_accel_config 00:15:44.873 09:44:08 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:15:44.873 09:44:08 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:15:44.873 09:44:08 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:15:44.873 09:44:08 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:15:44.873 09:44:08 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:15:44.873 09:44:08 -- accel/accel.sh@40 -- # local IFS=, 00:15:44.873 09:44:08 -- accel/accel.sh@41 -- # jq -r . 00:15:44.873 [2024-04-18 09:44:08.084422] Starting SPDK v24.05-pre git sha1 65b4e17c6 / DPDK 23.11.0 initialization... 00:15:44.873 [2024-04-18 09:44:08.085359] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62230 ] 00:15:45.181 [2024-04-18 09:44:08.283258] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:45.181 [2024-04-18 09:44:08.565512] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:15:45.441 09:44:08 -- accel/accel.sh@20 -- # val= 00:15:45.441 09:44:08 -- accel/accel.sh@21 -- # case "$var" in 00:15:45.441 09:44:08 -- accel/accel.sh@19 -- # IFS=: 00:15:45.441 09:44:08 -- accel/accel.sh@19 -- # read -r var val 00:15:45.441 09:44:08 -- accel/accel.sh@20 -- # val= 00:15:45.441 09:44:08 -- accel/accel.sh@21 -- # case "$var" in 00:15:45.441 09:44:08 -- accel/accel.sh@19 -- # IFS=: 00:15:45.441 09:44:08 -- accel/accel.sh@19 -- # read -r var val 00:15:45.441 09:44:08 -- accel/accel.sh@20 -- # val=0x1 00:15:45.441 09:44:08 -- accel/accel.sh@21 -- # case "$var" in 00:15:45.441 09:44:08 -- accel/accel.sh@19 -- # IFS=: 00:15:45.441 09:44:08 -- accel/accel.sh@19 -- # read -r var val 00:15:45.441 09:44:08 -- accel/accel.sh@20 -- # val= 00:15:45.441 09:44:08 -- accel/accel.sh@21 -- # case "$var" in 00:15:45.441 09:44:08 -- accel/accel.sh@19 -- # IFS=: 00:15:45.441 09:44:08 -- accel/accel.sh@19 -- # read -r var val 00:15:45.441 09:44:08 -- accel/accel.sh@20 -- # val= 00:15:45.441 09:44:08 -- accel/accel.sh@21 -- # case "$var" in 00:15:45.441 09:44:08 -- accel/accel.sh@19 -- # IFS=: 00:15:45.441 09:44:08 -- accel/accel.sh@19 -- # read -r var val 00:15:45.441 09:44:08 -- accel/accel.sh@20 -- # val=fill 00:15:45.441 09:44:08 -- accel/accel.sh@21 -- # case "$var" in 00:15:45.441 09:44:08 -- accel/accel.sh@23 -- # accel_opc=fill 00:15:45.441 09:44:08 -- accel/accel.sh@19 -- # IFS=: 00:15:45.441 09:44:08 -- accel/accel.sh@19 -- # read -r var val 00:15:45.441 09:44:08 -- accel/accel.sh@20 -- # val=0x80 00:15:45.441 09:44:08 -- accel/accel.sh@21 -- # case "$var" in 00:15:45.441 09:44:08 -- accel/accel.sh@19 -- # IFS=: 00:15:45.441 09:44:08 -- accel/accel.sh@19 -- # read -r var val 00:15:45.441 09:44:08 -- accel/accel.sh@20 -- # val='4096 bytes' 00:15:45.441 09:44:08 -- accel/accel.sh@21 -- # case "$var" in 00:15:45.441 09:44:08 -- accel/accel.sh@19 -- # IFS=: 00:15:45.441 09:44:08 -- accel/accel.sh@19 -- # read -r var val 00:15:45.441 09:44:08 -- accel/accel.sh@20 -- # val= 00:15:45.441 09:44:08 -- accel/accel.sh@21 -- # case 
"$var" in 00:15:45.441 09:44:08 -- accel/accel.sh@19 -- # IFS=: 00:15:45.441 09:44:08 -- accel/accel.sh@19 -- # read -r var val 00:15:45.441 09:44:08 -- accel/accel.sh@20 -- # val=software 00:15:45.441 09:44:08 -- accel/accel.sh@21 -- # case "$var" in 00:15:45.441 09:44:08 -- accel/accel.sh@22 -- # accel_module=software 00:15:45.441 09:44:08 -- accel/accel.sh@19 -- # IFS=: 00:15:45.441 09:44:08 -- accel/accel.sh@19 -- # read -r var val 00:15:45.441 09:44:08 -- accel/accel.sh@20 -- # val=64 00:15:45.441 09:44:08 -- accel/accel.sh@21 -- # case "$var" in 00:15:45.441 09:44:08 -- accel/accel.sh@19 -- # IFS=: 00:15:45.441 09:44:08 -- accel/accel.sh@19 -- # read -r var val 00:15:45.441 09:44:08 -- accel/accel.sh@20 -- # val=64 00:15:45.441 09:44:08 -- accel/accel.sh@21 -- # case "$var" in 00:15:45.441 09:44:08 -- accel/accel.sh@19 -- # IFS=: 00:15:45.441 09:44:08 -- accel/accel.sh@19 -- # read -r var val 00:15:45.441 09:44:08 -- accel/accel.sh@20 -- # val=1 00:15:45.441 09:44:08 -- accel/accel.sh@21 -- # case "$var" in 00:15:45.441 09:44:08 -- accel/accel.sh@19 -- # IFS=: 00:15:45.441 09:44:08 -- accel/accel.sh@19 -- # read -r var val 00:15:45.441 09:44:08 -- accel/accel.sh@20 -- # val='1 seconds' 00:15:45.441 09:44:08 -- accel/accel.sh@21 -- # case "$var" in 00:15:45.441 09:44:08 -- accel/accel.sh@19 -- # IFS=: 00:15:45.441 09:44:08 -- accel/accel.sh@19 -- # read -r var val 00:15:45.441 09:44:08 -- accel/accel.sh@20 -- # val=Yes 00:15:45.441 09:44:08 -- accel/accel.sh@21 -- # case "$var" in 00:15:45.441 09:44:08 -- accel/accel.sh@19 -- # IFS=: 00:15:45.441 09:44:08 -- accel/accel.sh@19 -- # read -r var val 00:15:45.441 09:44:08 -- accel/accel.sh@20 -- # val= 00:15:45.441 09:44:08 -- accel/accel.sh@21 -- # case "$var" in 00:15:45.441 09:44:08 -- accel/accel.sh@19 -- # IFS=: 00:15:45.441 09:44:08 -- accel/accel.sh@19 -- # read -r var val 00:15:45.441 09:44:08 -- accel/accel.sh@20 -- # val= 00:15:45.441 09:44:08 -- accel/accel.sh@21 -- # case "$var" in 00:15:45.441 09:44:08 -- accel/accel.sh@19 -- # IFS=: 00:15:45.441 09:44:08 -- accel/accel.sh@19 -- # read -r var val 00:15:47.343 09:44:10 -- accel/accel.sh@20 -- # val= 00:15:47.343 09:44:10 -- accel/accel.sh@21 -- # case "$var" in 00:15:47.343 09:44:10 -- accel/accel.sh@19 -- # IFS=: 00:15:47.343 09:44:10 -- accel/accel.sh@19 -- # read -r var val 00:15:47.343 09:44:10 -- accel/accel.sh@20 -- # val= 00:15:47.343 09:44:10 -- accel/accel.sh@21 -- # case "$var" in 00:15:47.343 09:44:10 -- accel/accel.sh@19 -- # IFS=: 00:15:47.343 09:44:10 -- accel/accel.sh@19 -- # read -r var val 00:15:47.343 09:44:10 -- accel/accel.sh@20 -- # val= 00:15:47.343 09:44:10 -- accel/accel.sh@21 -- # case "$var" in 00:15:47.343 09:44:10 -- accel/accel.sh@19 -- # IFS=: 00:15:47.343 09:44:10 -- accel/accel.sh@19 -- # read -r var val 00:15:47.343 09:44:10 -- accel/accel.sh@20 -- # val= 00:15:47.343 09:44:10 -- accel/accel.sh@21 -- # case "$var" in 00:15:47.343 09:44:10 -- accel/accel.sh@19 -- # IFS=: 00:15:47.343 09:44:10 -- accel/accel.sh@19 -- # read -r var val 00:15:47.343 09:44:10 -- accel/accel.sh@20 -- # val= 00:15:47.343 09:44:10 -- accel/accel.sh@21 -- # case "$var" in 00:15:47.343 09:44:10 -- accel/accel.sh@19 -- # IFS=: 00:15:47.343 09:44:10 -- accel/accel.sh@19 -- # read -r var val 00:15:47.343 09:44:10 -- accel/accel.sh@20 -- # val= 00:15:47.343 09:44:10 -- accel/accel.sh@21 -- # case "$var" in 00:15:47.343 09:44:10 -- accel/accel.sh@19 -- # IFS=: 00:15:47.343 09:44:10 -- accel/accel.sh@19 -- # read -r var val 00:15:47.343 09:44:10 -- accel/accel.sh@27 -- # [[ -n 
software ]] 00:15:47.343 09:44:10 -- accel/accel.sh@27 -- # [[ -n fill ]] 00:15:47.343 09:44:10 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:15:47.343 00:15:47.343 real 0m2.642s 00:15:47.343 user 0m2.318s 00:15:47.343 sys 0m0.219s 00:15:47.343 09:44:10 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:15:47.343 ************************************ 00:15:47.343 END TEST accel_fill 00:15:47.343 ************************************ 00:15:47.343 09:44:10 -- common/autotest_common.sh@10 -- # set +x 00:15:47.343 09:44:10 -- accel/accel.sh@105 -- # run_test accel_copy_crc32c accel_test -t 1 -w copy_crc32c -y 00:15:47.343 09:44:10 -- common/autotest_common.sh@1087 -- # '[' 7 -le 1 ']' 00:15:47.343 09:44:10 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:15:47.343 09:44:10 -- common/autotest_common.sh@10 -- # set +x 00:15:47.602 ************************************ 00:15:47.602 START TEST accel_copy_crc32c 00:15:47.602 ************************************ 00:15:47.602 09:44:10 -- common/autotest_common.sh@1111 -- # accel_test -t 1 -w copy_crc32c -y 00:15:47.602 09:44:10 -- accel/accel.sh@16 -- # local accel_opc 00:15:47.602 09:44:10 -- accel/accel.sh@17 -- # local accel_module 00:15:47.602 09:44:10 -- accel/accel.sh@19 -- # IFS=: 00:15:47.602 09:44:10 -- accel/accel.sh@19 -- # read -r var val 00:15:47.602 09:44:10 -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy_crc32c -y 00:15:47.602 09:44:10 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y 00:15:47.602 09:44:10 -- accel/accel.sh@12 -- # build_accel_config 00:15:47.602 09:44:10 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:15:47.602 09:44:10 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:15:47.602 09:44:10 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:15:47.602 09:44:10 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:15:47.602 09:44:10 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:15:47.602 09:44:10 -- accel/accel.sh@40 -- # local IFS=, 00:15:47.602 09:44:10 -- accel/accel.sh@41 -- # jq -r . 00:15:47.602 [2024-04-18 09:44:10.838504] Starting SPDK v24.05-pre git sha1 65b4e17c6 / DPDK 23.11.0 initialization... 
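With fill passing, the next workload exercises the fused copy-plus-crc32c opcode. As logged above, the driver command reduces to:

    /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y

and the val= dump that follows simply echoes the parsed parameters back (software module, 4096-byte buffers, a one-second run).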
00:15:47.602 [2024-04-18 09:44:10.838725] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62285 ] 00:15:47.861 [2024-04-18 09:44:11.020184] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:48.119 [2024-04-18 09:44:11.270843] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:15:48.119 09:44:11 -- accel/accel.sh@20 -- # val= 00:15:48.119 09:44:11 -- accel/accel.sh@21 -- # case "$var" in 00:15:48.119 09:44:11 -- accel/accel.sh@19 -- # IFS=: 00:15:48.119 09:44:11 -- accel/accel.sh@19 -- # read -r var val 00:15:48.119 09:44:11 -- accel/accel.sh@20 -- # val= 00:15:48.119 09:44:11 -- accel/accel.sh@21 -- # case "$var" in 00:15:48.119 09:44:11 -- accel/accel.sh@19 -- # IFS=: 00:15:48.119 09:44:11 -- accel/accel.sh@19 -- # read -r var val 00:15:48.119 09:44:11 -- accel/accel.sh@20 -- # val=0x1 00:15:48.119 09:44:11 -- accel/accel.sh@21 -- # case "$var" in 00:15:48.119 09:44:11 -- accel/accel.sh@19 -- # IFS=: 00:15:48.119 09:44:11 -- accel/accel.sh@19 -- # read -r var val 00:15:48.119 09:44:11 -- accel/accel.sh@20 -- # val= 00:15:48.119 09:44:11 -- accel/accel.sh@21 -- # case "$var" in 00:15:48.119 09:44:11 -- accel/accel.sh@19 -- # IFS=: 00:15:48.119 09:44:11 -- accel/accel.sh@19 -- # read -r var val 00:15:48.119 09:44:11 -- accel/accel.sh@20 -- # val= 00:15:48.119 09:44:11 -- accel/accel.sh@21 -- # case "$var" in 00:15:48.119 09:44:11 -- accel/accel.sh@19 -- # IFS=: 00:15:48.119 09:44:11 -- accel/accel.sh@19 -- # read -r var val 00:15:48.120 09:44:11 -- accel/accel.sh@20 -- # val=copy_crc32c 00:15:48.120 09:44:11 -- accel/accel.sh@21 -- # case "$var" in 00:15:48.120 09:44:11 -- accel/accel.sh@23 -- # accel_opc=copy_crc32c 00:15:48.120 09:44:11 -- accel/accel.sh@19 -- # IFS=: 00:15:48.120 09:44:11 -- accel/accel.sh@19 -- # read -r var val 00:15:48.120 09:44:11 -- accel/accel.sh@20 -- # val=0 00:15:48.120 09:44:11 -- accel/accel.sh@21 -- # case "$var" in 00:15:48.120 09:44:11 -- accel/accel.sh@19 -- # IFS=: 00:15:48.120 09:44:11 -- accel/accel.sh@19 -- # read -r var val 00:15:48.120 09:44:11 -- accel/accel.sh@20 -- # val='4096 bytes' 00:15:48.120 09:44:11 -- accel/accel.sh@21 -- # case "$var" in 00:15:48.120 09:44:11 -- accel/accel.sh@19 -- # IFS=: 00:15:48.120 09:44:11 -- accel/accel.sh@19 -- # read -r var val 00:15:48.120 09:44:11 -- accel/accel.sh@20 -- # val='4096 bytes' 00:15:48.120 09:44:11 -- accel/accel.sh@21 -- # case "$var" in 00:15:48.120 09:44:11 -- accel/accel.sh@19 -- # IFS=: 00:15:48.120 09:44:11 -- accel/accel.sh@19 -- # read -r var val 00:15:48.120 09:44:11 -- accel/accel.sh@20 -- # val= 00:15:48.120 09:44:11 -- accel/accel.sh@21 -- # case "$var" in 00:15:48.120 09:44:11 -- accel/accel.sh@19 -- # IFS=: 00:15:48.120 09:44:11 -- accel/accel.sh@19 -- # read -r var val 00:15:48.120 09:44:11 -- accel/accel.sh@20 -- # val=software 00:15:48.120 09:44:11 -- accel/accel.sh@21 -- # case "$var" in 00:15:48.120 09:44:11 -- accel/accel.sh@22 -- # accel_module=software 00:15:48.120 09:44:11 -- accel/accel.sh@19 -- # IFS=: 00:15:48.120 09:44:11 -- accel/accel.sh@19 -- # read -r var val 00:15:48.120 09:44:11 -- accel/accel.sh@20 -- # val=32 00:15:48.120 09:44:11 -- accel/accel.sh@21 -- # case "$var" in 00:15:48.120 09:44:11 -- accel/accel.sh@19 -- # IFS=: 00:15:48.120 09:44:11 -- accel/accel.sh@19 -- # read -r var val 00:15:48.120 09:44:11 -- accel/accel.sh@20 -- # val=32 
00:15:48.120 09:44:11 -- accel/accel.sh@21 -- # case "$var" in 00:15:48.120 09:44:11 -- accel/accel.sh@19 -- # IFS=: 00:15:48.120 09:44:11 -- accel/accel.sh@19 -- # read -r var val 00:15:48.120 09:44:11 -- accel/accel.sh@20 -- # val=1 00:15:48.120 09:44:11 -- accel/accel.sh@21 -- # case "$var" in 00:15:48.120 09:44:11 -- accel/accel.sh@19 -- # IFS=: 00:15:48.120 09:44:11 -- accel/accel.sh@19 -- # read -r var val 00:15:48.120 09:44:11 -- accel/accel.sh@20 -- # val='1 seconds' 00:15:48.120 09:44:11 -- accel/accel.sh@21 -- # case "$var" in 00:15:48.120 09:44:11 -- accel/accel.sh@19 -- # IFS=: 00:15:48.120 09:44:11 -- accel/accel.sh@19 -- # read -r var val 00:15:48.120 09:44:11 -- accel/accel.sh@20 -- # val=Yes 00:15:48.120 09:44:11 -- accel/accel.sh@21 -- # case "$var" in 00:15:48.120 09:44:11 -- accel/accel.sh@19 -- # IFS=: 00:15:48.120 09:44:11 -- accel/accel.sh@19 -- # read -r var val 00:15:48.120 09:44:11 -- accel/accel.sh@20 -- # val= 00:15:48.120 09:44:11 -- accel/accel.sh@21 -- # case "$var" in 00:15:48.120 09:44:11 -- accel/accel.sh@19 -- # IFS=: 00:15:48.120 09:44:11 -- accel/accel.sh@19 -- # read -r var val 00:15:48.120 09:44:11 -- accel/accel.sh@20 -- # val= 00:15:48.120 09:44:11 -- accel/accel.sh@21 -- # case "$var" in 00:15:48.120 09:44:11 -- accel/accel.sh@19 -- # IFS=: 00:15:48.120 09:44:11 -- accel/accel.sh@19 -- # read -r var val 00:15:50.047 09:44:13 -- accel/accel.sh@20 -- # val= 00:15:50.047 09:44:13 -- accel/accel.sh@21 -- # case "$var" in 00:15:50.047 09:44:13 -- accel/accel.sh@19 -- # IFS=: 00:15:50.047 09:44:13 -- accel/accel.sh@19 -- # read -r var val 00:15:50.047 09:44:13 -- accel/accel.sh@20 -- # val= 00:15:50.047 09:44:13 -- accel/accel.sh@21 -- # case "$var" in 00:15:50.047 09:44:13 -- accel/accel.sh@19 -- # IFS=: 00:15:50.047 09:44:13 -- accel/accel.sh@19 -- # read -r var val 00:15:50.047 09:44:13 -- accel/accel.sh@20 -- # val= 00:15:50.047 09:44:13 -- accel/accel.sh@21 -- # case "$var" in 00:15:50.047 09:44:13 -- accel/accel.sh@19 -- # IFS=: 00:15:50.047 09:44:13 -- accel/accel.sh@19 -- # read -r var val 00:15:50.047 09:44:13 -- accel/accel.sh@20 -- # val= 00:15:50.047 09:44:13 -- accel/accel.sh@21 -- # case "$var" in 00:15:50.047 09:44:13 -- accel/accel.sh@19 -- # IFS=: 00:15:50.047 09:44:13 -- accel/accel.sh@19 -- # read -r var val 00:15:50.047 09:44:13 -- accel/accel.sh@20 -- # val= 00:15:50.047 09:44:13 -- accel/accel.sh@21 -- # case "$var" in 00:15:50.047 09:44:13 -- accel/accel.sh@19 -- # IFS=: 00:15:50.047 09:44:13 -- accel/accel.sh@19 -- # read -r var val 00:15:50.047 09:44:13 -- accel/accel.sh@20 -- # val= 00:15:50.047 09:44:13 -- accel/accel.sh@21 -- # case "$var" in 00:15:50.047 09:44:13 -- accel/accel.sh@19 -- # IFS=: 00:15:50.047 09:44:13 -- accel/accel.sh@19 -- # read -r var val 00:15:50.047 09:44:13 -- accel/accel.sh@27 -- # [[ -n software ]] 00:15:50.047 09:44:13 -- accel/accel.sh@27 -- # [[ -n copy_crc32c ]] 00:15:50.047 ************************************ 00:15:50.047 END TEST accel_copy_crc32c 00:15:50.047 ************************************ 00:15:50.047 09:44:13 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:15:50.047 00:15:50.047 real 0m2.580s 00:15:50.047 user 0m2.262s 00:15:50.047 sys 0m0.225s 00:15:50.047 09:44:13 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:15:50.047 09:44:13 -- common/autotest_common.sh@10 -- # set +x 00:15:50.047 09:44:13 -- accel/accel.sh@106 -- # run_test accel_copy_crc32c_C2 accel_test -t 1 -w copy_crc32c -y -C 2 00:15:50.047 09:44:13 -- common/autotest_common.sh@1087 -- # '[' 9 -le 1 
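copy_crc32c passes in the same ~2.6 s envelope, and its chained variant is queued with -C 2. The val= dump on the next line shows an '8192 bytes' entry alongside the usual '4096 bytes', which reads as two 4096-byte sources being chained into one CRC-and-copy, though that interpretation is an assumption on top of the log. The queued command itself is:

    run_test accel_copy_crc32c_C2 accel_test -t 1 -w copy_crc32c -y -C 2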
']' 00:15:50.047 09:44:13 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:15:50.047 09:44:13 -- common/autotest_common.sh@10 -- # set +x 00:15:50.306 ************************************ 00:15:50.306 START TEST accel_copy_crc32c_C2 00:15:50.306 ************************************ 00:15:50.306 09:44:13 -- common/autotest_common.sh@1111 -- # accel_test -t 1 -w copy_crc32c -y -C 2 00:15:50.306 09:44:13 -- accel/accel.sh@16 -- # local accel_opc 00:15:50.306 09:44:13 -- accel/accel.sh@17 -- # local accel_module 00:15:50.306 09:44:13 -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy_crc32c -y -C 2 00:15:50.306 09:44:13 -- accel/accel.sh@19 -- # IFS=: 00:15:50.306 09:44:13 -- accel/accel.sh@19 -- # read -r var val 00:15:50.306 09:44:13 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y -C 2 00:15:50.306 09:44:13 -- accel/accel.sh@12 -- # build_accel_config 00:15:50.306 09:44:13 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:15:50.306 09:44:13 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:15:50.306 09:44:13 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:15:50.306 09:44:13 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:15:50.306 09:44:13 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:15:50.306 09:44:13 -- accel/accel.sh@40 -- # local IFS=, 00:15:50.306 09:44:13 -- accel/accel.sh@41 -- # jq -r . 00:15:50.306 [2024-04-18 09:44:13.530268] Starting SPDK v24.05-pre git sha1 65b4e17c6 / DPDK 23.11.0 initialization... 00:15:50.306 [2024-04-18 09:44:13.530407] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62335 ] 00:15:50.306 [2024-04-18 09:44:13.695162] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:50.565 [2024-04-18 09:44:13.941985] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:15:50.825 09:44:14 -- accel/accel.sh@20 -- # val= 00:15:50.825 09:44:14 -- accel/accel.sh@21 -- # case "$var" in 00:15:50.825 09:44:14 -- accel/accel.sh@19 -- # IFS=: 00:15:50.825 09:44:14 -- accel/accel.sh@19 -- # read -r var val 00:15:50.825 09:44:14 -- accel/accel.sh@20 -- # val= 00:15:50.825 09:44:14 -- accel/accel.sh@21 -- # case "$var" in 00:15:50.825 09:44:14 -- accel/accel.sh@19 -- # IFS=: 00:15:50.825 09:44:14 -- accel/accel.sh@19 -- # read -r var val 00:15:50.825 09:44:14 -- accel/accel.sh@20 -- # val=0x1 00:15:50.825 09:44:14 -- accel/accel.sh@21 -- # case "$var" in 00:15:50.825 09:44:14 -- accel/accel.sh@19 -- # IFS=: 00:15:50.825 09:44:14 -- accel/accel.sh@19 -- # read -r var val 00:15:50.825 09:44:14 -- accel/accel.sh@20 -- # val= 00:15:50.825 09:44:14 -- accel/accel.sh@21 -- # case "$var" in 00:15:50.825 09:44:14 -- accel/accel.sh@19 -- # IFS=: 00:15:50.825 09:44:14 -- accel/accel.sh@19 -- # read -r var val 00:15:50.825 09:44:14 -- accel/accel.sh@20 -- # val= 00:15:50.825 09:44:14 -- accel/accel.sh@21 -- # case "$var" in 00:15:50.825 09:44:14 -- accel/accel.sh@19 -- # IFS=: 00:15:50.825 09:44:14 -- accel/accel.sh@19 -- # read -r var val 00:15:50.825 09:44:14 -- accel/accel.sh@20 -- # val=copy_crc32c 00:15:50.825 09:44:14 -- accel/accel.sh@21 -- # case "$var" in 00:15:50.825 09:44:14 -- accel/accel.sh@23 -- # accel_opc=copy_crc32c 00:15:50.825 09:44:14 -- accel/accel.sh@19 -- # IFS=: 00:15:50.825 09:44:14 -- accel/accel.sh@19 -- # read -r var val 00:15:50.825 09:44:14 -- accel/accel.sh@20 -- # val=0 00:15:50.825 09:44:14 -- 
accel/accel.sh@21 -- # case "$var" in 00:15:50.825 09:44:14 -- accel/accel.sh@19 -- # IFS=: 00:15:50.825 09:44:14 -- accel/accel.sh@19 -- # read -r var val 00:15:50.825 09:44:14 -- accel/accel.sh@20 -- # val='4096 bytes' 00:15:50.825 09:44:14 -- accel/accel.sh@21 -- # case "$var" in 00:15:50.825 09:44:14 -- accel/accel.sh@19 -- # IFS=: 00:15:50.825 09:44:14 -- accel/accel.sh@19 -- # read -r var val 00:15:50.825 09:44:14 -- accel/accel.sh@20 -- # val='8192 bytes' 00:15:50.825 09:44:14 -- accel/accel.sh@21 -- # case "$var" in 00:15:50.825 09:44:14 -- accel/accel.sh@19 -- # IFS=: 00:15:50.825 09:44:14 -- accel/accel.sh@19 -- # read -r var val 00:15:50.825 09:44:14 -- accel/accel.sh@20 -- # val= 00:15:50.825 09:44:14 -- accel/accel.sh@21 -- # case "$var" in 00:15:50.825 09:44:14 -- accel/accel.sh@19 -- # IFS=: 00:15:50.825 09:44:14 -- accel/accel.sh@19 -- # read -r var val 00:15:50.825 09:44:14 -- accel/accel.sh@20 -- # val=software 00:15:50.825 09:44:14 -- accel/accel.sh@21 -- # case "$var" in 00:15:50.825 09:44:14 -- accel/accel.sh@22 -- # accel_module=software 00:15:50.825 09:44:14 -- accel/accel.sh@19 -- # IFS=: 00:15:50.825 09:44:14 -- accel/accel.sh@19 -- # read -r var val 00:15:50.825 09:44:14 -- accel/accel.sh@20 -- # val=32 00:15:50.825 09:44:14 -- accel/accel.sh@21 -- # case "$var" in 00:15:50.825 09:44:14 -- accel/accel.sh@19 -- # IFS=: 00:15:50.825 09:44:14 -- accel/accel.sh@19 -- # read -r var val 00:15:50.825 09:44:14 -- accel/accel.sh@20 -- # val=32 00:15:50.825 09:44:14 -- accel/accel.sh@21 -- # case "$var" in 00:15:50.825 09:44:14 -- accel/accel.sh@19 -- # IFS=: 00:15:50.825 09:44:14 -- accel/accel.sh@19 -- # read -r var val 00:15:50.825 09:44:14 -- accel/accel.sh@20 -- # val=1 00:15:50.825 09:44:14 -- accel/accel.sh@21 -- # case "$var" in 00:15:50.825 09:44:14 -- accel/accel.sh@19 -- # IFS=: 00:15:50.825 09:44:14 -- accel/accel.sh@19 -- # read -r var val 00:15:50.825 09:44:14 -- accel/accel.sh@20 -- # val='1 seconds' 00:15:50.825 09:44:14 -- accel/accel.sh@21 -- # case "$var" in 00:15:50.825 09:44:14 -- accel/accel.sh@19 -- # IFS=: 00:15:50.825 09:44:14 -- accel/accel.sh@19 -- # read -r var val 00:15:50.825 09:44:14 -- accel/accel.sh@20 -- # val=Yes 00:15:50.825 09:44:14 -- accel/accel.sh@21 -- # case "$var" in 00:15:50.825 09:44:14 -- accel/accel.sh@19 -- # IFS=: 00:15:50.825 09:44:14 -- accel/accel.sh@19 -- # read -r var val 00:15:50.825 09:44:14 -- accel/accel.sh@20 -- # val= 00:15:50.825 09:44:14 -- accel/accel.sh@21 -- # case "$var" in 00:15:50.825 09:44:14 -- accel/accel.sh@19 -- # IFS=: 00:15:50.825 09:44:14 -- accel/accel.sh@19 -- # read -r var val 00:15:50.825 09:44:14 -- accel/accel.sh@20 -- # val= 00:15:50.825 09:44:14 -- accel/accel.sh@21 -- # case "$var" in 00:15:50.825 09:44:14 -- accel/accel.sh@19 -- # IFS=: 00:15:50.825 09:44:14 -- accel/accel.sh@19 -- # read -r var val 00:15:52.782 09:44:16 -- accel/accel.sh@20 -- # val= 00:15:52.782 09:44:16 -- accel/accel.sh@21 -- # case "$var" in 00:15:52.782 09:44:16 -- accel/accel.sh@19 -- # IFS=: 00:15:52.782 09:44:16 -- accel/accel.sh@19 -- # read -r var val 00:15:52.782 09:44:16 -- accel/accel.sh@20 -- # val= 00:15:52.782 09:44:16 -- accel/accel.sh@21 -- # case "$var" in 00:15:52.782 09:44:16 -- accel/accel.sh@19 -- # IFS=: 00:15:52.782 09:44:16 -- accel/accel.sh@19 -- # read -r var val 00:15:52.782 09:44:16 -- accel/accel.sh@20 -- # val= 00:15:52.782 09:44:16 -- accel/accel.sh@21 -- # case "$var" in 00:15:52.782 09:44:16 -- accel/accel.sh@19 -- # IFS=: 00:15:52.782 09:44:16 -- accel/accel.sh@19 -- # read -r var val 
00:15:52.782 09:44:16 -- accel/accel.sh@20 -- # val= 00:15:52.782 09:44:16 -- accel/accel.sh@21 -- # case "$var" in 00:15:52.782 09:44:16 -- accel/accel.sh@19 -- # IFS=: 00:15:52.782 09:44:16 -- accel/accel.sh@19 -- # read -r var val 00:15:52.782 09:44:16 -- accel/accel.sh@20 -- # val= 00:15:52.782 09:44:16 -- accel/accel.sh@21 -- # case "$var" in 00:15:52.782 09:44:16 -- accel/accel.sh@19 -- # IFS=: 00:15:52.782 09:44:16 -- accel/accel.sh@19 -- # read -r var val 00:15:52.782 09:44:16 -- accel/accel.sh@20 -- # val= 00:15:52.782 09:44:16 -- accel/accel.sh@21 -- # case "$var" in 00:15:52.782 09:44:16 -- accel/accel.sh@19 -- # IFS=: 00:15:52.782 09:44:16 -- accel/accel.sh@19 -- # read -r var val 00:15:52.782 09:44:16 -- accel/accel.sh@27 -- # [[ -n software ]] 00:15:52.782 09:44:16 -- accel/accel.sh@27 -- # [[ -n copy_crc32c ]] 00:15:52.782 09:44:16 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:15:52.782 00:15:52.782 real 0m2.591s 00:15:52.782 user 0m2.295s 00:15:52.782 sys 0m0.196s 00:15:52.782 09:44:16 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:15:52.782 09:44:16 -- common/autotest_common.sh@10 -- # set +x 00:15:52.782 ************************************ 00:15:52.782 END TEST accel_copy_crc32c_C2 00:15:52.782 ************************************ 00:15:52.782 09:44:16 -- accel/accel.sh@107 -- # run_test accel_dualcast accel_test -t 1 -w dualcast -y 00:15:52.782 09:44:16 -- common/autotest_common.sh@1087 -- # '[' 7 -le 1 ']' 00:15:52.782 09:44:16 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:15:52.782 09:44:16 -- common/autotest_common.sh@10 -- # set +x 00:15:53.041 ************************************ 00:15:53.041 START TEST accel_dualcast 00:15:53.041 ************************************ 00:15:53.041 09:44:16 -- common/autotest_common.sh@1111 -- # accel_test -t 1 -w dualcast -y 00:15:53.041 09:44:16 -- accel/accel.sh@16 -- # local accel_opc 00:15:53.041 09:44:16 -- accel/accel.sh@17 -- # local accel_module 00:15:53.041 09:44:16 -- accel/accel.sh@15 -- # accel_perf -t 1 -w dualcast -y 00:15:53.041 09:44:16 -- accel/accel.sh@19 -- # IFS=: 00:15:53.041 09:44:16 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dualcast -y 00:15:53.041 09:44:16 -- accel/accel.sh@19 -- # read -r var val 00:15:53.041 09:44:16 -- accel/accel.sh@12 -- # build_accel_config 00:15:53.041 09:44:16 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:15:53.041 09:44:16 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:15:53.041 09:44:16 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:15:53.041 09:44:16 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:15:53.041 09:44:16 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:15:53.041 09:44:16 -- accel/accel.sh@40 -- # local IFS=, 00:15:53.041 09:44:16 -- accel/accel.sh@41 -- # jq -r . 00:15:53.041 [2024-04-18 09:44:16.265168] Starting SPDK v24.05-pre git sha1 65b4e17c6 / DPDK 23.11.0 initialization... 
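The chained copy_crc32c case completes, and the dualcast workload starts with the same pattern:

    /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dualcast -y

Reading dualcast as "one source buffer written to two destinations" is the usual meaning of the opcode rather than something this excerpt states explicitly.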
00:15:53.041 [2024-04-18 09:44:16.265450] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62386 ] 00:15:53.300 [2024-04-18 09:44:16.455732] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:53.559 [2024-04-18 09:44:16.747773] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:15:53.818 09:44:16 -- accel/accel.sh@20 -- # val= 00:15:53.818 09:44:16 -- accel/accel.sh@21 -- # case "$var" in 00:15:53.818 09:44:16 -- accel/accel.sh@19 -- # IFS=: 00:15:53.818 09:44:16 -- accel/accel.sh@19 -- # read -r var val 00:15:53.818 09:44:16 -- accel/accel.sh@20 -- # val= 00:15:53.818 09:44:16 -- accel/accel.sh@21 -- # case "$var" in 00:15:53.818 09:44:16 -- accel/accel.sh@19 -- # IFS=: 00:15:53.818 09:44:16 -- accel/accel.sh@19 -- # read -r var val 00:15:53.818 09:44:16 -- accel/accel.sh@20 -- # val=0x1 00:15:53.818 09:44:16 -- accel/accel.sh@21 -- # case "$var" in 00:15:53.818 09:44:16 -- accel/accel.sh@19 -- # IFS=: 00:15:53.818 09:44:16 -- accel/accel.sh@19 -- # read -r var val 00:15:53.818 09:44:16 -- accel/accel.sh@20 -- # val= 00:15:53.818 09:44:16 -- accel/accel.sh@21 -- # case "$var" in 00:15:53.818 09:44:16 -- accel/accel.sh@19 -- # IFS=: 00:15:53.818 09:44:16 -- accel/accel.sh@19 -- # read -r var val 00:15:53.818 09:44:16 -- accel/accel.sh@20 -- # val= 00:15:53.818 09:44:16 -- accel/accel.sh@21 -- # case "$var" in 00:15:53.818 09:44:16 -- accel/accel.sh@19 -- # IFS=: 00:15:53.818 09:44:16 -- accel/accel.sh@19 -- # read -r var val 00:15:53.818 09:44:16 -- accel/accel.sh@20 -- # val=dualcast 00:15:53.818 09:44:16 -- accel/accel.sh@21 -- # case "$var" in 00:15:53.818 09:44:16 -- accel/accel.sh@23 -- # accel_opc=dualcast 00:15:53.818 09:44:16 -- accel/accel.sh@19 -- # IFS=: 00:15:53.818 09:44:16 -- accel/accel.sh@19 -- # read -r var val 00:15:53.818 09:44:16 -- accel/accel.sh@20 -- # val='4096 bytes' 00:15:53.818 09:44:16 -- accel/accel.sh@21 -- # case "$var" in 00:15:53.818 09:44:16 -- accel/accel.sh@19 -- # IFS=: 00:15:53.818 09:44:16 -- accel/accel.sh@19 -- # read -r var val 00:15:53.818 09:44:16 -- accel/accel.sh@20 -- # val= 00:15:53.818 09:44:16 -- accel/accel.sh@21 -- # case "$var" in 00:15:53.818 09:44:16 -- accel/accel.sh@19 -- # IFS=: 00:15:53.818 09:44:16 -- accel/accel.sh@19 -- # read -r var val 00:15:53.818 09:44:16 -- accel/accel.sh@20 -- # val=software 00:15:53.818 09:44:16 -- accel/accel.sh@21 -- # case "$var" in 00:15:53.818 09:44:16 -- accel/accel.sh@22 -- # accel_module=software 00:15:53.818 09:44:16 -- accel/accel.sh@19 -- # IFS=: 00:15:53.818 09:44:16 -- accel/accel.sh@19 -- # read -r var val 00:15:53.818 09:44:16 -- accel/accel.sh@20 -- # val=32 00:15:53.818 09:44:16 -- accel/accel.sh@21 -- # case "$var" in 00:15:53.818 09:44:16 -- accel/accel.sh@19 -- # IFS=: 00:15:53.818 09:44:16 -- accel/accel.sh@19 -- # read -r var val 00:15:53.818 09:44:16 -- accel/accel.sh@20 -- # val=32 00:15:53.818 09:44:16 -- accel/accel.sh@21 -- # case "$var" in 00:15:53.818 09:44:16 -- accel/accel.sh@19 -- # IFS=: 00:15:53.818 09:44:16 -- accel/accel.sh@19 -- # read -r var val 00:15:53.818 09:44:16 -- accel/accel.sh@20 -- # val=1 00:15:53.818 09:44:16 -- accel/accel.sh@21 -- # case "$var" in 00:15:53.818 09:44:16 -- accel/accel.sh@19 -- # IFS=: 00:15:53.818 09:44:16 -- accel/accel.sh@19 -- # read -r var val 00:15:53.818 09:44:16 -- accel/accel.sh@20 -- # val='1 seconds' 
00:15:53.818 09:44:16 -- accel/accel.sh@21 -- # case "$var" in 00:15:53.818 09:44:16 -- accel/accel.sh@19 -- # IFS=: 00:15:53.818 09:44:16 -- accel/accel.sh@19 -- # read -r var val 00:15:53.818 09:44:16 -- accel/accel.sh@20 -- # val=Yes 00:15:53.818 09:44:16 -- accel/accel.sh@21 -- # case "$var" in 00:15:53.818 09:44:16 -- accel/accel.sh@19 -- # IFS=: 00:15:53.818 09:44:16 -- accel/accel.sh@19 -- # read -r var val 00:15:53.818 09:44:16 -- accel/accel.sh@20 -- # val= 00:15:53.818 09:44:16 -- accel/accel.sh@21 -- # case "$var" in 00:15:53.818 09:44:16 -- accel/accel.sh@19 -- # IFS=: 00:15:53.818 09:44:16 -- accel/accel.sh@19 -- # read -r var val 00:15:53.818 09:44:16 -- accel/accel.sh@20 -- # val= 00:15:53.818 09:44:16 -- accel/accel.sh@21 -- # case "$var" in 00:15:53.818 09:44:16 -- accel/accel.sh@19 -- # IFS=: 00:15:53.818 09:44:16 -- accel/accel.sh@19 -- # read -r var val 00:15:55.722 09:44:18 -- accel/accel.sh@20 -- # val= 00:15:55.722 09:44:18 -- accel/accel.sh@21 -- # case "$var" in 00:15:55.722 09:44:18 -- accel/accel.sh@19 -- # IFS=: 00:15:55.722 09:44:18 -- accel/accel.sh@19 -- # read -r var val 00:15:55.722 09:44:18 -- accel/accel.sh@20 -- # val= 00:15:55.722 09:44:18 -- accel/accel.sh@21 -- # case "$var" in 00:15:55.722 09:44:18 -- accel/accel.sh@19 -- # IFS=: 00:15:55.722 09:44:18 -- accel/accel.sh@19 -- # read -r var val 00:15:55.722 09:44:18 -- accel/accel.sh@20 -- # val= 00:15:55.722 09:44:18 -- accel/accel.sh@21 -- # case "$var" in 00:15:55.722 09:44:18 -- accel/accel.sh@19 -- # IFS=: 00:15:55.722 09:44:18 -- accel/accel.sh@19 -- # read -r var val 00:15:55.722 09:44:18 -- accel/accel.sh@20 -- # val= 00:15:55.722 09:44:18 -- accel/accel.sh@21 -- # case "$var" in 00:15:55.722 09:44:18 -- accel/accel.sh@19 -- # IFS=: 00:15:55.722 09:44:18 -- accel/accel.sh@19 -- # read -r var val 00:15:55.722 09:44:18 -- accel/accel.sh@20 -- # val= 00:15:55.722 09:44:18 -- accel/accel.sh@21 -- # case "$var" in 00:15:55.722 09:44:18 -- accel/accel.sh@19 -- # IFS=: 00:15:55.722 09:44:18 -- accel/accel.sh@19 -- # read -r var val 00:15:55.722 09:44:18 -- accel/accel.sh@20 -- # val= 00:15:55.722 09:44:18 -- accel/accel.sh@21 -- # case "$var" in 00:15:55.722 09:44:18 -- accel/accel.sh@19 -- # IFS=: 00:15:55.722 09:44:18 -- accel/accel.sh@19 -- # read -r var val 00:15:55.722 09:44:18 -- accel/accel.sh@27 -- # [[ -n software ]] 00:15:55.722 09:44:18 -- accel/accel.sh@27 -- # [[ -n dualcast ]] 00:15:55.722 09:44:18 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:15:55.722 00:15:55.723 real 0m2.669s 00:15:55.723 user 0m2.334s 00:15:55.723 sys 0m0.235s 00:15:55.723 09:44:18 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:15:55.723 09:44:18 -- common/autotest_common.sh@10 -- # set +x 00:15:55.723 ************************************ 00:15:55.723 END TEST accel_dualcast 00:15:55.723 ************************************ 00:15:55.723 09:44:18 -- accel/accel.sh@108 -- # run_test accel_compare accel_test -t 1 -w compare -y 00:15:55.723 09:44:18 -- common/autotest_common.sh@1087 -- # '[' 7 -le 1 ']' 00:15:55.723 09:44:18 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:15:55.723 09:44:18 -- common/autotest_common.sh@10 -- # set +x 00:15:55.723 ************************************ 00:15:55.723 START TEST accel_compare 00:15:55.723 ************************************ 00:15:55.723 09:44:18 -- common/autotest_common.sh@1111 -- # accel_test -t 1 -w compare -y 00:15:55.723 09:44:18 -- accel/accel.sh@16 -- # local accel_opc 00:15:55.723 09:44:18 -- accel/accel.sh@17 -- # local 
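dualcast passes at around 2.7 s, and the compare workload is queued next; copied from the run_test line above:

    run_test accel_compare accel_test -t 1 -w compare -y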
accel_module 00:15:55.723 09:44:18 -- accel/accel.sh@19 -- # IFS=: 00:15:55.723 09:44:18 -- accel/accel.sh@19 -- # read -r var val 00:15:55.723 09:44:18 -- accel/accel.sh@15 -- # accel_perf -t 1 -w compare -y 00:15:55.723 09:44:18 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compare -y 00:15:55.723 09:44:18 -- accel/accel.sh@12 -- # build_accel_config 00:15:55.723 09:44:18 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:15:55.723 09:44:18 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:15:55.723 09:44:18 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:15:55.723 09:44:18 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:15:55.723 09:44:18 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:15:55.723 09:44:18 -- accel/accel.sh@40 -- # local IFS=, 00:15:55.723 09:44:18 -- accel/accel.sh@41 -- # jq -r . 00:15:55.723 [2024-04-18 09:44:19.027633] Starting SPDK v24.05-pre git sha1 65b4e17c6 / DPDK 23.11.0 initialization... 00:15:55.723 [2024-04-18 09:44:19.027828] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62442 ] 00:15:55.981 [2024-04-18 09:44:19.205024] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:56.242 [2024-04-18 09:44:19.499138] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:15:56.508 09:44:19 -- accel/accel.sh@20 -- # val= 00:15:56.508 09:44:19 -- accel/accel.sh@21 -- # case "$var" in 00:15:56.508 09:44:19 -- accel/accel.sh@19 -- # IFS=: 00:15:56.508 09:44:19 -- accel/accel.sh@19 -- # read -r var val 00:15:56.508 09:44:19 -- accel/accel.sh@20 -- # val= 00:15:56.508 09:44:19 -- accel/accel.sh@21 -- # case "$var" in 00:15:56.508 09:44:19 -- accel/accel.sh@19 -- # IFS=: 00:15:56.508 09:44:19 -- accel/accel.sh@19 -- # read -r var val 00:15:56.508 09:44:19 -- accel/accel.sh@20 -- # val=0x1 00:15:56.508 09:44:19 -- accel/accel.sh@21 -- # case "$var" in 00:15:56.508 09:44:19 -- accel/accel.sh@19 -- # IFS=: 00:15:56.508 09:44:19 -- accel/accel.sh@19 -- # read -r var val 00:15:56.508 09:44:19 -- accel/accel.sh@20 -- # val= 00:15:56.508 09:44:19 -- accel/accel.sh@21 -- # case "$var" in 00:15:56.508 09:44:19 -- accel/accel.sh@19 -- # IFS=: 00:15:56.508 09:44:19 -- accel/accel.sh@19 -- # read -r var val 00:15:56.508 09:44:19 -- accel/accel.sh@20 -- # val= 00:15:56.508 09:44:19 -- accel/accel.sh@21 -- # case "$var" in 00:15:56.508 09:44:19 -- accel/accel.sh@19 -- # IFS=: 00:15:56.508 09:44:19 -- accel/accel.sh@19 -- # read -r var val 00:15:56.508 09:44:19 -- accel/accel.sh@20 -- # val=compare 00:15:56.508 09:44:19 -- accel/accel.sh@21 -- # case "$var" in 00:15:56.508 09:44:19 -- accel/accel.sh@23 -- # accel_opc=compare 00:15:56.508 09:44:19 -- accel/accel.sh@19 -- # IFS=: 00:15:56.508 09:44:19 -- accel/accel.sh@19 -- # read -r var val 00:15:56.508 09:44:19 -- accel/accel.sh@20 -- # val='4096 bytes' 00:15:56.508 09:44:19 -- accel/accel.sh@21 -- # case "$var" in 00:15:56.508 09:44:19 -- accel/accel.sh@19 -- # IFS=: 00:15:56.508 09:44:19 -- accel/accel.sh@19 -- # read -r var val 00:15:56.508 09:44:19 -- accel/accel.sh@20 -- # val= 00:15:56.508 09:44:19 -- accel/accel.sh@21 -- # case "$var" in 00:15:56.508 09:44:19 -- accel/accel.sh@19 -- # IFS=: 00:15:56.508 09:44:19 -- accel/accel.sh@19 -- # read -r var val 00:15:56.508 09:44:19 -- accel/accel.sh@20 -- # val=software 00:15:56.508 09:44:19 -- accel/accel.sh@21 -- # case "$var" in 
00:15:56.508 09:44:19 -- accel/accel.sh@22 -- # accel_module=software 00:15:56.508 09:44:19 -- accel/accel.sh@19 -- # IFS=: 00:15:56.508 09:44:19 -- accel/accel.sh@19 -- # read -r var val 00:15:56.508 09:44:19 -- accel/accel.sh@20 -- # val=32 00:15:56.508 09:44:19 -- accel/accel.sh@21 -- # case "$var" in 00:15:56.508 09:44:19 -- accel/accel.sh@19 -- # IFS=: 00:15:56.508 09:44:19 -- accel/accel.sh@19 -- # read -r var val 00:15:56.508 09:44:19 -- accel/accel.sh@20 -- # val=32 00:15:56.508 09:44:19 -- accel/accel.sh@21 -- # case "$var" in 00:15:56.508 09:44:19 -- accel/accel.sh@19 -- # IFS=: 00:15:56.508 09:44:19 -- accel/accel.sh@19 -- # read -r var val 00:15:56.508 09:44:19 -- accel/accel.sh@20 -- # val=1 00:15:56.508 09:44:19 -- accel/accel.sh@21 -- # case "$var" in 00:15:56.508 09:44:19 -- accel/accel.sh@19 -- # IFS=: 00:15:56.508 09:44:19 -- accel/accel.sh@19 -- # read -r var val 00:15:56.508 09:44:19 -- accel/accel.sh@20 -- # val='1 seconds' 00:15:56.508 09:44:19 -- accel/accel.sh@21 -- # case "$var" in 00:15:56.508 09:44:19 -- accel/accel.sh@19 -- # IFS=: 00:15:56.508 09:44:19 -- accel/accel.sh@19 -- # read -r var val 00:15:56.508 09:44:19 -- accel/accel.sh@20 -- # val=Yes 00:15:56.508 09:44:19 -- accel/accel.sh@21 -- # case "$var" in 00:15:56.508 09:44:19 -- accel/accel.sh@19 -- # IFS=: 00:15:56.508 09:44:19 -- accel/accel.sh@19 -- # read -r var val 00:15:56.508 09:44:19 -- accel/accel.sh@20 -- # val= 00:15:56.508 09:44:19 -- accel/accel.sh@21 -- # case "$var" in 00:15:56.508 09:44:19 -- accel/accel.sh@19 -- # IFS=: 00:15:56.508 09:44:19 -- accel/accel.sh@19 -- # read -r var val 00:15:56.508 09:44:19 -- accel/accel.sh@20 -- # val= 00:15:56.508 09:44:19 -- accel/accel.sh@21 -- # case "$var" in 00:15:56.508 09:44:19 -- accel/accel.sh@19 -- # IFS=: 00:15:56.508 09:44:19 -- accel/accel.sh@19 -- # read -r var val 00:15:58.411 09:44:21 -- accel/accel.sh@20 -- # val= 00:15:58.411 09:44:21 -- accel/accel.sh@21 -- # case "$var" in 00:15:58.411 09:44:21 -- accel/accel.sh@19 -- # IFS=: 00:15:58.411 09:44:21 -- accel/accel.sh@19 -- # read -r var val 00:15:58.411 09:44:21 -- accel/accel.sh@20 -- # val= 00:15:58.411 09:44:21 -- accel/accel.sh@21 -- # case "$var" in 00:15:58.411 09:44:21 -- accel/accel.sh@19 -- # IFS=: 00:15:58.411 09:44:21 -- accel/accel.sh@19 -- # read -r var val 00:15:58.411 09:44:21 -- accel/accel.sh@20 -- # val= 00:15:58.411 09:44:21 -- accel/accel.sh@21 -- # case "$var" in 00:15:58.411 09:44:21 -- accel/accel.sh@19 -- # IFS=: 00:15:58.411 09:44:21 -- accel/accel.sh@19 -- # read -r var val 00:15:58.411 09:44:21 -- accel/accel.sh@20 -- # val= 00:15:58.411 09:44:21 -- accel/accel.sh@21 -- # case "$var" in 00:15:58.411 09:44:21 -- accel/accel.sh@19 -- # IFS=: 00:15:58.411 09:44:21 -- accel/accel.sh@19 -- # read -r var val 00:15:58.411 09:44:21 -- accel/accel.sh@20 -- # val= 00:15:58.411 09:44:21 -- accel/accel.sh@21 -- # case "$var" in 00:15:58.411 09:44:21 -- accel/accel.sh@19 -- # IFS=: 00:15:58.411 09:44:21 -- accel/accel.sh@19 -- # read -r var val 00:15:58.411 09:44:21 -- accel/accel.sh@20 -- # val= 00:15:58.411 09:44:21 -- accel/accel.sh@21 -- # case "$var" in 00:15:58.411 09:44:21 -- accel/accel.sh@19 -- # IFS=: 00:15:58.411 09:44:21 -- accel/accel.sh@19 -- # read -r var val 00:15:58.411 09:44:21 -- accel/accel.sh@27 -- # [[ -n software ]] 00:15:58.411 09:44:21 -- accel/accel.sh@27 -- # [[ -n compare ]] 00:15:58.411 09:44:21 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:15:58.411 00:15:58.411 real 0m2.632s 00:15:58.411 user 0m2.331s 00:15:58.411 sys 
0m0.201s 00:15:58.411 09:44:21 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:15:58.411 09:44:21 -- common/autotest_common.sh@10 -- # set +x 00:15:58.411 ************************************ 00:15:58.411 END TEST accel_compare 00:15:58.411 ************************************ 00:15:58.411 09:44:21 -- accel/accel.sh@109 -- # run_test accel_xor accel_test -t 1 -w xor -y 00:15:58.411 09:44:21 -- common/autotest_common.sh@1087 -- # '[' 7 -le 1 ']' 00:15:58.411 09:44:21 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:15:58.411 09:44:21 -- common/autotest_common.sh@10 -- # set +x 00:15:58.411 ************************************ 00:15:58.411 START TEST accel_xor 00:15:58.411 ************************************ 00:15:58.411 09:44:21 -- common/autotest_common.sh@1111 -- # accel_test -t 1 -w xor -y 00:15:58.411 09:44:21 -- accel/accel.sh@16 -- # local accel_opc 00:15:58.411 09:44:21 -- accel/accel.sh@17 -- # local accel_module 00:15:58.411 09:44:21 -- accel/accel.sh@19 -- # IFS=: 00:15:58.411 09:44:21 -- accel/accel.sh@19 -- # read -r var val 00:15:58.411 09:44:21 -- accel/accel.sh@15 -- # accel_perf -t 1 -w xor -y 00:15:58.411 09:44:21 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y 00:15:58.411 09:44:21 -- accel/accel.sh@12 -- # build_accel_config 00:15:58.411 09:44:21 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:15:58.411 09:44:21 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:15:58.411 09:44:21 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:15:58.411 09:44:21 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:15:58.411 09:44:21 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:15:58.411 09:44:21 -- accel/accel.sh@40 -- # local IFS=, 00:15:58.411 09:44:21 -- accel/accel.sh@41 -- # jq -r . 00:15:58.411 [2024-04-18 09:44:21.777071] Starting SPDK v24.05-pre git sha1 65b4e17c6 / DPDK 23.11.0 initialization... 
00:15:58.411 [2024-04-18 09:44:21.778012] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62493 ] 00:15:58.669 [2024-04-18 09:44:21.941032] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:58.927 [2024-04-18 09:44:22.195401] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:15:59.185 09:44:22 -- accel/accel.sh@20 -- # val= 00:15:59.185 09:44:22 -- accel/accel.sh@21 -- # case "$var" in 00:15:59.185 09:44:22 -- accel/accel.sh@19 -- # IFS=: 00:15:59.185 09:44:22 -- accel/accel.sh@19 -- # read -r var val 00:15:59.185 09:44:22 -- accel/accel.sh@20 -- # val= 00:15:59.185 09:44:22 -- accel/accel.sh@21 -- # case "$var" in 00:15:59.185 09:44:22 -- accel/accel.sh@19 -- # IFS=: 00:15:59.185 09:44:22 -- accel/accel.sh@19 -- # read -r var val 00:15:59.185 09:44:22 -- accel/accel.sh@20 -- # val=0x1 00:15:59.185 09:44:22 -- accel/accel.sh@21 -- # case "$var" in 00:15:59.185 09:44:22 -- accel/accel.sh@19 -- # IFS=: 00:15:59.185 09:44:22 -- accel/accel.sh@19 -- # read -r var val 00:15:59.185 09:44:22 -- accel/accel.sh@20 -- # val= 00:15:59.186 09:44:22 -- accel/accel.sh@21 -- # case "$var" in 00:15:59.186 09:44:22 -- accel/accel.sh@19 -- # IFS=: 00:15:59.186 09:44:22 -- accel/accel.sh@19 -- # read -r var val 00:15:59.186 09:44:22 -- accel/accel.sh@20 -- # val= 00:15:59.186 09:44:22 -- accel/accel.sh@21 -- # case "$var" in 00:15:59.186 09:44:22 -- accel/accel.sh@19 -- # IFS=: 00:15:59.186 09:44:22 -- accel/accel.sh@19 -- # read -r var val 00:15:59.186 09:44:22 -- accel/accel.sh@20 -- # val=xor 00:15:59.186 09:44:22 -- accel/accel.sh@21 -- # case "$var" in 00:15:59.186 09:44:22 -- accel/accel.sh@23 -- # accel_opc=xor 00:15:59.186 09:44:22 -- accel/accel.sh@19 -- # IFS=: 00:15:59.186 09:44:22 -- accel/accel.sh@19 -- # read -r var val 00:15:59.186 09:44:22 -- accel/accel.sh@20 -- # val=2 00:15:59.186 09:44:22 -- accel/accel.sh@21 -- # case "$var" in 00:15:59.186 09:44:22 -- accel/accel.sh@19 -- # IFS=: 00:15:59.186 09:44:22 -- accel/accel.sh@19 -- # read -r var val 00:15:59.186 09:44:22 -- accel/accel.sh@20 -- # val='4096 bytes' 00:15:59.186 09:44:22 -- accel/accel.sh@21 -- # case "$var" in 00:15:59.186 09:44:22 -- accel/accel.sh@19 -- # IFS=: 00:15:59.186 09:44:22 -- accel/accel.sh@19 -- # read -r var val 00:15:59.186 09:44:22 -- accel/accel.sh@20 -- # val= 00:15:59.186 09:44:22 -- accel/accel.sh@21 -- # case "$var" in 00:15:59.186 09:44:22 -- accel/accel.sh@19 -- # IFS=: 00:15:59.186 09:44:22 -- accel/accel.sh@19 -- # read -r var val 00:15:59.186 09:44:22 -- accel/accel.sh@20 -- # val=software 00:15:59.186 09:44:22 -- accel/accel.sh@21 -- # case "$var" in 00:15:59.186 09:44:22 -- accel/accel.sh@22 -- # accel_module=software 00:15:59.186 09:44:22 -- accel/accel.sh@19 -- # IFS=: 00:15:59.186 09:44:22 -- accel/accel.sh@19 -- # read -r var val 00:15:59.186 09:44:22 -- accel/accel.sh@20 -- # val=32 00:15:59.186 09:44:22 -- accel/accel.sh@21 -- # case "$var" in 00:15:59.186 09:44:22 -- accel/accel.sh@19 -- # IFS=: 00:15:59.186 09:44:22 -- accel/accel.sh@19 -- # read -r var val 00:15:59.186 09:44:22 -- accel/accel.sh@20 -- # val=32 00:15:59.186 09:44:22 -- accel/accel.sh@21 -- # case "$var" in 00:15:59.186 09:44:22 -- accel/accel.sh@19 -- # IFS=: 00:15:59.186 09:44:22 -- accel/accel.sh@19 -- # read -r var val 00:15:59.186 09:44:22 -- accel/accel.sh@20 -- # val=1 00:15:59.186 09:44:22 -- 
accel/accel.sh@21 -- # case "$var" in 00:15:59.186 09:44:22 -- accel/accel.sh@19 -- # IFS=: 00:15:59.186 09:44:22 -- accel/accel.sh@19 -- # read -r var val 00:15:59.186 09:44:22 -- accel/accel.sh@20 -- # val='1 seconds' 00:15:59.186 09:44:22 -- accel/accel.sh@21 -- # case "$var" in 00:15:59.186 09:44:22 -- accel/accel.sh@19 -- # IFS=: 00:15:59.186 09:44:22 -- accel/accel.sh@19 -- # read -r var val 00:15:59.186 09:44:22 -- accel/accel.sh@20 -- # val=Yes 00:15:59.186 09:44:22 -- accel/accel.sh@21 -- # case "$var" in 00:15:59.186 09:44:22 -- accel/accel.sh@19 -- # IFS=: 00:15:59.186 09:44:22 -- accel/accel.sh@19 -- # read -r var val 00:15:59.186 09:44:22 -- accel/accel.sh@20 -- # val= 00:15:59.186 09:44:22 -- accel/accel.sh@21 -- # case "$var" in 00:15:59.186 09:44:22 -- accel/accel.sh@19 -- # IFS=: 00:15:59.186 09:44:22 -- accel/accel.sh@19 -- # read -r var val 00:15:59.186 09:44:22 -- accel/accel.sh@20 -- # val= 00:15:59.186 09:44:22 -- accel/accel.sh@21 -- # case "$var" in 00:15:59.186 09:44:22 -- accel/accel.sh@19 -- # IFS=: 00:15:59.186 09:44:22 -- accel/accel.sh@19 -- # read -r var val 00:16:01.087 09:44:24 -- accel/accel.sh@20 -- # val= 00:16:01.087 09:44:24 -- accel/accel.sh@21 -- # case "$var" in 00:16:01.087 09:44:24 -- accel/accel.sh@19 -- # IFS=: 00:16:01.087 09:44:24 -- accel/accel.sh@19 -- # read -r var val 00:16:01.087 09:44:24 -- accel/accel.sh@20 -- # val= 00:16:01.087 09:44:24 -- accel/accel.sh@21 -- # case "$var" in 00:16:01.087 09:44:24 -- accel/accel.sh@19 -- # IFS=: 00:16:01.087 09:44:24 -- accel/accel.sh@19 -- # read -r var val 00:16:01.087 09:44:24 -- accel/accel.sh@20 -- # val= 00:16:01.087 09:44:24 -- accel/accel.sh@21 -- # case "$var" in 00:16:01.087 09:44:24 -- accel/accel.sh@19 -- # IFS=: 00:16:01.087 09:44:24 -- accel/accel.sh@19 -- # read -r var val 00:16:01.087 09:44:24 -- accel/accel.sh@20 -- # val= 00:16:01.087 09:44:24 -- accel/accel.sh@21 -- # case "$var" in 00:16:01.087 09:44:24 -- accel/accel.sh@19 -- # IFS=: 00:16:01.087 09:44:24 -- accel/accel.sh@19 -- # read -r var val 00:16:01.087 09:44:24 -- accel/accel.sh@20 -- # val= 00:16:01.087 09:44:24 -- accel/accel.sh@21 -- # case "$var" in 00:16:01.087 09:44:24 -- accel/accel.sh@19 -- # IFS=: 00:16:01.087 09:44:24 -- accel/accel.sh@19 -- # read -r var val 00:16:01.087 09:44:24 -- accel/accel.sh@20 -- # val= 00:16:01.087 09:44:24 -- accel/accel.sh@21 -- # case "$var" in 00:16:01.087 09:44:24 -- accel/accel.sh@19 -- # IFS=: 00:16:01.087 09:44:24 -- accel/accel.sh@19 -- # read -r var val 00:16:01.087 09:44:24 -- accel/accel.sh@27 -- # [[ -n software ]] 00:16:01.087 09:44:24 -- accel/accel.sh@27 -- # [[ -n xor ]] 00:16:01.087 09:44:24 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:16:01.087 00:16:01.087 real 0m2.557s 00:16:01.087 user 0m2.263s 00:16:01.087 sys 0m0.196s 00:16:01.087 09:44:24 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:16:01.087 ************************************ 00:16:01.087 END TEST accel_xor 00:16:01.087 ************************************ 00:16:01.087 09:44:24 -- common/autotest_common.sh@10 -- # set +x 00:16:01.087 09:44:24 -- accel/accel.sh@110 -- # run_test accel_xor accel_test -t 1 -w xor -y -x 3 00:16:01.087 09:44:24 -- common/autotest_common.sh@1087 -- # '[' 9 -le 1 ']' 00:16:01.087 09:44:24 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:16:01.087 09:44:24 -- common/autotest_common.sh@10 -- # set +x 00:16:01.087 ************************************ 00:16:01.087 START TEST accel_xor 00:16:01.087 ************************************ 00:16:01.087 
09:44:24 -- common/autotest_common.sh@1111 -- # accel_test -t 1 -w xor -y -x 3 00:16:01.087 09:44:24 -- accel/accel.sh@16 -- # local accel_opc 00:16:01.087 09:44:24 -- accel/accel.sh@17 -- # local accel_module 00:16:01.087 09:44:24 -- accel/accel.sh@19 -- # IFS=: 00:16:01.087 09:44:24 -- accel/accel.sh@19 -- # read -r var val 00:16:01.087 09:44:24 -- accel/accel.sh@15 -- # accel_perf -t 1 -w xor -y -x 3 00:16:01.087 09:44:24 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y -x 3 00:16:01.087 09:44:24 -- accel/accel.sh@12 -- # build_accel_config 00:16:01.087 09:44:24 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:16:01.087 09:44:24 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:16:01.087 09:44:24 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:16:01.087 09:44:24 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:16:01.087 09:44:24 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:16:01.087 09:44:24 -- accel/accel.sh@40 -- # local IFS=, 00:16:01.087 09:44:24 -- accel/accel.sh@41 -- # jq -r . 00:16:01.087 [2024-04-18 09:44:24.460781] Starting SPDK v24.05-pre git sha1 65b4e17c6 / DPDK 23.11.0 initialization... 00:16:01.087 [2024-04-18 09:44:24.460960] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62544 ] 00:16:01.346 [2024-04-18 09:44:24.635619] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:01.605 [2024-04-18 09:44:24.875385] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:16:01.864 09:44:25 -- accel/accel.sh@20 -- # val= 00:16:01.864 09:44:25 -- accel/accel.sh@21 -- # case "$var" in 00:16:01.864 09:44:25 -- accel/accel.sh@19 -- # IFS=: 00:16:01.864 09:44:25 -- accel/accel.sh@19 -- # read -r var val 00:16:01.864 09:44:25 -- accel/accel.sh@20 -- # val= 00:16:01.864 09:44:25 -- accel/accel.sh@21 -- # case "$var" in 00:16:01.864 09:44:25 -- accel/accel.sh@19 -- # IFS=: 00:16:01.864 09:44:25 -- accel/accel.sh@19 -- # read -r var val 00:16:01.864 09:44:25 -- accel/accel.sh@20 -- # val=0x1 00:16:01.864 09:44:25 -- accel/accel.sh@21 -- # case "$var" in 00:16:01.864 09:44:25 -- accel/accel.sh@19 -- # IFS=: 00:16:01.864 09:44:25 -- accel/accel.sh@19 -- # read -r var val 00:16:01.864 09:44:25 -- accel/accel.sh@20 -- # val= 00:16:01.864 09:44:25 -- accel/accel.sh@21 -- # case "$var" in 00:16:01.864 09:44:25 -- accel/accel.sh@19 -- # IFS=: 00:16:01.864 09:44:25 -- accel/accel.sh@19 -- # read -r var val 00:16:01.864 09:44:25 -- accel/accel.sh@20 -- # val= 00:16:01.864 09:44:25 -- accel/accel.sh@21 -- # case "$var" in 00:16:01.864 09:44:25 -- accel/accel.sh@19 -- # IFS=: 00:16:01.864 09:44:25 -- accel/accel.sh@19 -- # read -r var val 00:16:01.864 09:44:25 -- accel/accel.sh@20 -- # val=xor 00:16:01.864 09:44:25 -- accel/accel.sh@21 -- # case "$var" in 00:16:01.864 09:44:25 -- accel/accel.sh@23 -- # accel_opc=xor 00:16:01.864 09:44:25 -- accel/accel.sh@19 -- # IFS=: 00:16:01.864 09:44:25 -- accel/accel.sh@19 -- # read -r var val 00:16:01.864 09:44:25 -- accel/accel.sh@20 -- # val=3 00:16:01.864 09:44:25 -- accel/accel.sh@21 -- # case "$var" in 00:16:01.864 09:44:25 -- accel/accel.sh@19 -- # IFS=: 00:16:01.864 09:44:25 -- accel/accel.sh@19 -- # read -r var val 00:16:01.864 09:44:25 -- accel/accel.sh@20 -- # val='4096 bytes' 00:16:01.864 09:44:25 -- accel/accel.sh@21 -- # case "$var" in 00:16:01.864 09:44:25 -- accel/accel.sh@19 -- # IFS=: 
00:16:01.864 09:44:25 -- accel/accel.sh@19 -- # read -r var val 00:16:01.864 09:44:25 -- accel/accel.sh@20 -- # val= 00:16:01.864 09:44:25 -- accel/accel.sh@21 -- # case "$var" in 00:16:01.864 09:44:25 -- accel/accel.sh@19 -- # IFS=: 00:16:01.864 09:44:25 -- accel/accel.sh@19 -- # read -r var val 00:16:01.864 09:44:25 -- accel/accel.sh@20 -- # val=software 00:16:01.864 09:44:25 -- accel/accel.sh@21 -- # case "$var" in 00:16:01.864 09:44:25 -- accel/accel.sh@22 -- # accel_module=software 00:16:01.864 09:44:25 -- accel/accel.sh@19 -- # IFS=: 00:16:01.864 09:44:25 -- accel/accel.sh@19 -- # read -r var val 00:16:01.864 09:44:25 -- accel/accel.sh@20 -- # val=32 00:16:01.864 09:44:25 -- accel/accel.sh@21 -- # case "$var" in 00:16:01.864 09:44:25 -- accel/accel.sh@19 -- # IFS=: 00:16:01.864 09:44:25 -- accel/accel.sh@19 -- # read -r var val 00:16:01.864 09:44:25 -- accel/accel.sh@20 -- # val=32 00:16:01.864 09:44:25 -- accel/accel.sh@21 -- # case "$var" in 00:16:01.864 09:44:25 -- accel/accel.sh@19 -- # IFS=: 00:16:01.864 09:44:25 -- accel/accel.sh@19 -- # read -r var val 00:16:01.864 09:44:25 -- accel/accel.sh@20 -- # val=1 00:16:01.864 09:44:25 -- accel/accel.sh@21 -- # case "$var" in 00:16:01.864 09:44:25 -- accel/accel.sh@19 -- # IFS=: 00:16:01.864 09:44:25 -- accel/accel.sh@19 -- # read -r var val 00:16:01.864 09:44:25 -- accel/accel.sh@20 -- # val='1 seconds' 00:16:01.864 09:44:25 -- accel/accel.sh@21 -- # case "$var" in 00:16:01.864 09:44:25 -- accel/accel.sh@19 -- # IFS=: 00:16:01.864 09:44:25 -- accel/accel.sh@19 -- # read -r var val 00:16:01.864 09:44:25 -- accel/accel.sh@20 -- # val=Yes 00:16:01.864 09:44:25 -- accel/accel.sh@21 -- # case "$var" in 00:16:01.864 09:44:25 -- accel/accel.sh@19 -- # IFS=: 00:16:01.864 09:44:25 -- accel/accel.sh@19 -- # read -r var val 00:16:01.864 09:44:25 -- accel/accel.sh@20 -- # val= 00:16:01.864 09:44:25 -- accel/accel.sh@21 -- # case "$var" in 00:16:01.864 09:44:25 -- accel/accel.sh@19 -- # IFS=: 00:16:01.864 09:44:25 -- accel/accel.sh@19 -- # read -r var val 00:16:01.864 09:44:25 -- accel/accel.sh@20 -- # val= 00:16:01.864 09:44:25 -- accel/accel.sh@21 -- # case "$var" in 00:16:01.864 09:44:25 -- accel/accel.sh@19 -- # IFS=: 00:16:01.864 09:44:25 -- accel/accel.sh@19 -- # read -r var val 00:16:03.792 09:44:26 -- accel/accel.sh@20 -- # val= 00:16:03.792 09:44:26 -- accel/accel.sh@21 -- # case "$var" in 00:16:03.792 09:44:26 -- accel/accel.sh@19 -- # IFS=: 00:16:03.792 09:44:26 -- accel/accel.sh@19 -- # read -r var val 00:16:03.792 09:44:26 -- accel/accel.sh@20 -- # val= 00:16:03.792 09:44:26 -- accel/accel.sh@21 -- # case "$var" in 00:16:03.792 09:44:26 -- accel/accel.sh@19 -- # IFS=: 00:16:03.792 09:44:26 -- accel/accel.sh@19 -- # read -r var val 00:16:03.792 09:44:26 -- accel/accel.sh@20 -- # val= 00:16:03.792 09:44:26 -- accel/accel.sh@21 -- # case "$var" in 00:16:03.792 09:44:26 -- accel/accel.sh@19 -- # IFS=: 00:16:03.792 09:44:26 -- accel/accel.sh@19 -- # read -r var val 00:16:03.792 09:44:26 -- accel/accel.sh@20 -- # val= 00:16:03.792 09:44:26 -- accel/accel.sh@21 -- # case "$var" in 00:16:03.792 09:44:26 -- accel/accel.sh@19 -- # IFS=: 00:16:03.792 09:44:26 -- accel/accel.sh@19 -- # read -r var val 00:16:03.792 09:44:26 -- accel/accel.sh@20 -- # val= 00:16:03.792 09:44:26 -- accel/accel.sh@21 -- # case "$var" in 00:16:03.792 09:44:26 -- accel/accel.sh@19 -- # IFS=: 00:16:03.792 09:44:26 -- accel/accel.sh@19 -- # read -r var val 00:16:03.792 09:44:26 -- accel/accel.sh@20 -- # val= 00:16:03.792 09:44:26 -- accel/accel.sh@21 -- # case "$var" in 
00:16:03.792 09:44:26 -- accel/accel.sh@19 -- # IFS=: 00:16:03.792 09:44:26 -- accel/accel.sh@19 -- # read -r var val 00:16:03.792 09:44:26 -- accel/accel.sh@27 -- # [[ -n software ]] 00:16:03.792 09:44:26 -- accel/accel.sh@27 -- # [[ -n xor ]] 00:16:03.792 09:44:26 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:16:03.792 00:16:03.792 real 0m2.555s 00:16:03.792 user 0m2.246s 00:16:03.792 sys 0m0.207s 00:16:03.792 09:44:26 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:16:03.792 ************************************ 00:16:03.792 END TEST accel_xor 00:16:03.792 ************************************ 00:16:03.792 09:44:26 -- common/autotest_common.sh@10 -- # set +x 00:16:03.792 09:44:26 -- accel/accel.sh@111 -- # run_test accel_dif_verify accel_test -t 1 -w dif_verify 00:16:03.792 09:44:26 -- common/autotest_common.sh@1087 -- # '[' 6 -le 1 ']' 00:16:03.792 09:44:26 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:16:03.792 09:44:26 -- common/autotest_common.sh@10 -- # set +x 00:16:03.792 ************************************ 00:16:03.792 START TEST accel_dif_verify 00:16:03.792 ************************************ 00:16:03.792 09:44:27 -- common/autotest_common.sh@1111 -- # accel_test -t 1 -w dif_verify 00:16:03.792 09:44:27 -- accel/accel.sh@16 -- # local accel_opc 00:16:03.792 09:44:27 -- accel/accel.sh@17 -- # local accel_module 00:16:03.792 09:44:27 -- accel/accel.sh@19 -- # IFS=: 00:16:03.792 09:44:27 -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_verify 00:16:03.792 09:44:27 -- accel/accel.sh@19 -- # read -r var val 00:16:03.793 09:44:27 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_verify 00:16:03.793 09:44:27 -- accel/accel.sh@12 -- # build_accel_config 00:16:03.793 09:44:27 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:16:03.793 09:44:27 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:16:03.793 09:44:27 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:16:03.793 09:44:27 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:16:03.793 09:44:27 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:16:03.793 09:44:27 -- accel/accel.sh@40 -- # local IFS=, 00:16:03.793 09:44:27 -- accel/accel.sh@41 -- # jq -r . 00:16:03.793 [2024-04-18 09:44:27.146087] Starting SPDK v24.05-pre git sha1 65b4e17c6 / DPDK 23.11.0 initialization... 
00:16:03.793 [2024-04-18 09:44:27.146438] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62596 ] 00:16:04.051 [2024-04-18 09:44:27.324338] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:04.310 [2024-04-18 09:44:27.621752] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:16:04.569 09:44:27 -- accel/accel.sh@20 -- # val= 00:16:04.569 09:44:27 -- accel/accel.sh@21 -- # case "$var" in 00:16:04.569 09:44:27 -- accel/accel.sh@19 -- # IFS=: 00:16:04.569 09:44:27 -- accel/accel.sh@19 -- # read -r var val 00:16:04.569 09:44:27 -- accel/accel.sh@20 -- # val= 00:16:04.569 09:44:27 -- accel/accel.sh@21 -- # case "$var" in 00:16:04.569 09:44:27 -- accel/accel.sh@19 -- # IFS=: 00:16:04.569 09:44:27 -- accel/accel.sh@19 -- # read -r var val 00:16:04.569 09:44:27 -- accel/accel.sh@20 -- # val=0x1 00:16:04.569 09:44:27 -- accel/accel.sh@21 -- # case "$var" in 00:16:04.569 09:44:27 -- accel/accel.sh@19 -- # IFS=: 00:16:04.569 09:44:27 -- accel/accel.sh@19 -- # read -r var val 00:16:04.569 09:44:27 -- accel/accel.sh@20 -- # val= 00:16:04.569 09:44:27 -- accel/accel.sh@21 -- # case "$var" in 00:16:04.569 09:44:27 -- accel/accel.sh@19 -- # IFS=: 00:16:04.569 09:44:27 -- accel/accel.sh@19 -- # read -r var val 00:16:04.569 09:44:27 -- accel/accel.sh@20 -- # val= 00:16:04.569 09:44:27 -- accel/accel.sh@21 -- # case "$var" in 00:16:04.569 09:44:27 -- accel/accel.sh@19 -- # IFS=: 00:16:04.569 09:44:27 -- accel/accel.sh@19 -- # read -r var val 00:16:04.569 09:44:27 -- accel/accel.sh@20 -- # val=dif_verify 00:16:04.569 09:44:27 -- accel/accel.sh@21 -- # case "$var" in 00:16:04.569 09:44:27 -- accel/accel.sh@23 -- # accel_opc=dif_verify 00:16:04.569 09:44:27 -- accel/accel.sh@19 -- # IFS=: 00:16:04.569 09:44:27 -- accel/accel.sh@19 -- # read -r var val 00:16:04.569 09:44:27 -- accel/accel.sh@20 -- # val='4096 bytes' 00:16:04.569 09:44:27 -- accel/accel.sh@21 -- # case "$var" in 00:16:04.569 09:44:27 -- accel/accel.sh@19 -- # IFS=: 00:16:04.569 09:44:27 -- accel/accel.sh@19 -- # read -r var val 00:16:04.569 09:44:27 -- accel/accel.sh@20 -- # val='4096 bytes' 00:16:04.569 09:44:27 -- accel/accel.sh@21 -- # case "$var" in 00:16:04.569 09:44:27 -- accel/accel.sh@19 -- # IFS=: 00:16:04.569 09:44:27 -- accel/accel.sh@19 -- # read -r var val 00:16:04.569 09:44:27 -- accel/accel.sh@20 -- # val='512 bytes' 00:16:04.569 09:44:27 -- accel/accel.sh@21 -- # case "$var" in 00:16:04.569 09:44:27 -- accel/accel.sh@19 -- # IFS=: 00:16:04.569 09:44:27 -- accel/accel.sh@19 -- # read -r var val 00:16:04.569 09:44:27 -- accel/accel.sh@20 -- # val='8 bytes' 00:16:04.569 09:44:27 -- accel/accel.sh@21 -- # case "$var" in 00:16:04.569 09:44:27 -- accel/accel.sh@19 -- # IFS=: 00:16:04.569 09:44:27 -- accel/accel.sh@19 -- # read -r var val 00:16:04.569 09:44:27 -- accel/accel.sh@20 -- # val= 00:16:04.569 09:44:27 -- accel/accel.sh@21 -- # case "$var" in 00:16:04.569 09:44:27 -- accel/accel.sh@19 -- # IFS=: 00:16:04.569 09:44:27 -- accel/accel.sh@19 -- # read -r var val 00:16:04.569 09:44:27 -- accel/accel.sh@20 -- # val=software 00:16:04.569 09:44:27 -- accel/accel.sh@21 -- # case "$var" in 00:16:04.569 09:44:27 -- accel/accel.sh@22 -- # accel_module=software 00:16:04.569 09:44:27 -- accel/accel.sh@19 -- # IFS=: 00:16:04.569 09:44:27 -- accel/accel.sh@19 -- # read -r var val 00:16:04.569 09:44:27 -- accel/accel.sh@20 
-- # val=32 00:16:04.569 09:44:27 -- accel/accel.sh@21 -- # case "$var" in 00:16:04.569 09:44:27 -- accel/accel.sh@19 -- # IFS=: 00:16:04.569 09:44:27 -- accel/accel.sh@19 -- # read -r var val 00:16:04.569 09:44:27 -- accel/accel.sh@20 -- # val=32 00:16:04.569 09:44:27 -- accel/accel.sh@21 -- # case "$var" in 00:16:04.569 09:44:27 -- accel/accel.sh@19 -- # IFS=: 00:16:04.569 09:44:27 -- accel/accel.sh@19 -- # read -r var val 00:16:04.569 09:44:27 -- accel/accel.sh@20 -- # val=1 00:16:04.569 09:44:27 -- accel/accel.sh@21 -- # case "$var" in 00:16:04.569 09:44:27 -- accel/accel.sh@19 -- # IFS=: 00:16:04.569 09:44:27 -- accel/accel.sh@19 -- # read -r var val 00:16:04.569 09:44:27 -- accel/accel.sh@20 -- # val='1 seconds' 00:16:04.569 09:44:27 -- accel/accel.sh@21 -- # case "$var" in 00:16:04.569 09:44:27 -- accel/accel.sh@19 -- # IFS=: 00:16:04.569 09:44:27 -- accel/accel.sh@19 -- # read -r var val 00:16:04.569 09:44:27 -- accel/accel.sh@20 -- # val=No 00:16:04.569 09:44:27 -- accel/accel.sh@21 -- # case "$var" in 00:16:04.569 09:44:27 -- accel/accel.sh@19 -- # IFS=: 00:16:04.569 09:44:27 -- accel/accel.sh@19 -- # read -r var val 00:16:04.569 09:44:27 -- accel/accel.sh@20 -- # val= 00:16:04.569 09:44:27 -- accel/accel.sh@21 -- # case "$var" in 00:16:04.569 09:44:27 -- accel/accel.sh@19 -- # IFS=: 00:16:04.569 09:44:27 -- accel/accel.sh@19 -- # read -r var val 00:16:04.569 09:44:27 -- accel/accel.sh@20 -- # val= 00:16:04.569 09:44:27 -- accel/accel.sh@21 -- # case "$var" in 00:16:04.569 09:44:27 -- accel/accel.sh@19 -- # IFS=: 00:16:04.569 09:44:27 -- accel/accel.sh@19 -- # read -r var val 00:16:06.473 09:44:29 -- accel/accel.sh@20 -- # val= 00:16:06.473 09:44:29 -- accel/accel.sh@21 -- # case "$var" in 00:16:06.473 09:44:29 -- accel/accel.sh@19 -- # IFS=: 00:16:06.474 09:44:29 -- accel/accel.sh@19 -- # read -r var val 00:16:06.474 09:44:29 -- accel/accel.sh@20 -- # val= 00:16:06.474 09:44:29 -- accel/accel.sh@21 -- # case "$var" in 00:16:06.474 09:44:29 -- accel/accel.sh@19 -- # IFS=: 00:16:06.474 09:44:29 -- accel/accel.sh@19 -- # read -r var val 00:16:06.474 09:44:29 -- accel/accel.sh@20 -- # val= 00:16:06.474 09:44:29 -- accel/accel.sh@21 -- # case "$var" in 00:16:06.474 09:44:29 -- accel/accel.sh@19 -- # IFS=: 00:16:06.474 09:44:29 -- accel/accel.sh@19 -- # read -r var val 00:16:06.474 09:44:29 -- accel/accel.sh@20 -- # val= 00:16:06.474 09:44:29 -- accel/accel.sh@21 -- # case "$var" in 00:16:06.474 09:44:29 -- accel/accel.sh@19 -- # IFS=: 00:16:06.474 09:44:29 -- accel/accel.sh@19 -- # read -r var val 00:16:06.474 09:44:29 -- accel/accel.sh@20 -- # val= 00:16:06.474 09:44:29 -- accel/accel.sh@21 -- # case "$var" in 00:16:06.474 09:44:29 -- accel/accel.sh@19 -- # IFS=: 00:16:06.474 09:44:29 -- accel/accel.sh@19 -- # read -r var val 00:16:06.474 09:44:29 -- accel/accel.sh@20 -- # val= 00:16:06.474 09:44:29 -- accel/accel.sh@21 -- # case "$var" in 00:16:06.474 09:44:29 -- accel/accel.sh@19 -- # IFS=: 00:16:06.474 09:44:29 -- accel/accel.sh@19 -- # read -r var val 00:16:06.474 09:44:29 -- accel/accel.sh@27 -- # [[ -n software ]] 00:16:06.474 09:44:29 -- accel/accel.sh@27 -- # [[ -n dif_verify ]] 00:16:06.474 09:44:29 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:16:06.474 00:16:06.474 real 0m2.635s 00:16:06.474 user 0m2.317s 00:16:06.474 sys 0m0.217s 00:16:06.474 09:44:29 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:16:06.474 ************************************ 00:16:06.474 END TEST accel_dif_verify 00:16:06.474 ************************************ 00:16:06.474 
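Note: every accel test in this block drives the same example binary; its exact command line is printed in the trace (the accel/accel.sh@12 lines). As a rough by-hand reproduction sketch for the dif_verify case that just finished — assuming the vagrant workspace layout shown in this log and omitting the JSON accel config that accel.sh pipes in over /dev/fd/62 —
  /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -t 1 -w dif_verify
runs the same 1-second dif_verify workload on the software module, using the 4096-, 512- and 8-byte sizes shown as 'val' assignments in the trace above.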
09:44:29 -- common/autotest_common.sh@10 -- # set +x 00:16:06.474 09:44:29 -- accel/accel.sh@112 -- # run_test accel_dif_generate accel_test -t 1 -w dif_generate 00:16:06.474 09:44:29 -- common/autotest_common.sh@1087 -- # '[' 6 -le 1 ']' 00:16:06.474 09:44:29 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:16:06.474 09:44:29 -- common/autotest_common.sh@10 -- # set +x 00:16:06.474 ************************************ 00:16:06.474 START TEST accel_dif_generate 00:16:06.474 ************************************ 00:16:06.474 09:44:29 -- common/autotest_common.sh@1111 -- # accel_test -t 1 -w dif_generate 00:16:06.474 09:44:29 -- accel/accel.sh@16 -- # local accel_opc 00:16:06.474 09:44:29 -- accel/accel.sh@17 -- # local accel_module 00:16:06.474 09:44:29 -- accel/accel.sh@19 -- # IFS=: 00:16:06.474 09:44:29 -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_generate 00:16:06.474 09:44:29 -- accel/accel.sh@19 -- # read -r var val 00:16:06.474 09:44:29 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate 00:16:06.474 09:44:29 -- accel/accel.sh@12 -- # build_accel_config 00:16:06.474 09:44:29 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:16:06.474 09:44:29 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:16:06.474 09:44:29 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:16:06.474 09:44:29 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:16:06.474 09:44:29 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:16:06.474 09:44:29 -- accel/accel.sh@40 -- # local IFS=, 00:16:06.474 09:44:29 -- accel/accel.sh@41 -- # jq -r . 00:16:06.732 [2024-04-18 09:44:29.902276] Starting SPDK v24.05-pre git sha1 65b4e17c6 / DPDK 23.11.0 initialization... 00:16:06.732 [2024-04-18 09:44:29.902454] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62647 ] 00:16:06.732 [2024-04-18 09:44:30.077548] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:06.990 [2024-04-18 09:44:30.318430] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:16:07.248 09:44:30 -- accel/accel.sh@20 -- # val= 00:16:07.248 09:44:30 -- accel/accel.sh@21 -- # case "$var" in 00:16:07.248 09:44:30 -- accel/accel.sh@19 -- # IFS=: 00:16:07.248 09:44:30 -- accel/accel.sh@19 -- # read -r var val 00:16:07.248 09:44:30 -- accel/accel.sh@20 -- # val= 00:16:07.248 09:44:30 -- accel/accel.sh@21 -- # case "$var" in 00:16:07.248 09:44:30 -- accel/accel.sh@19 -- # IFS=: 00:16:07.248 09:44:30 -- accel/accel.sh@19 -- # read -r var val 00:16:07.248 09:44:30 -- accel/accel.sh@20 -- # val=0x1 00:16:07.248 09:44:30 -- accel/accel.sh@21 -- # case "$var" in 00:16:07.248 09:44:30 -- accel/accel.sh@19 -- # IFS=: 00:16:07.248 09:44:30 -- accel/accel.sh@19 -- # read -r var val 00:16:07.248 09:44:30 -- accel/accel.sh@20 -- # val= 00:16:07.248 09:44:30 -- accel/accel.sh@21 -- # case "$var" in 00:16:07.248 09:44:30 -- accel/accel.sh@19 -- # IFS=: 00:16:07.248 09:44:30 -- accel/accel.sh@19 -- # read -r var val 00:16:07.248 09:44:30 -- accel/accel.sh@20 -- # val= 00:16:07.248 09:44:30 -- accel/accel.sh@21 -- # case "$var" in 00:16:07.248 09:44:30 -- accel/accel.sh@19 -- # IFS=: 00:16:07.248 09:44:30 -- accel/accel.sh@19 -- # read -r var val 00:16:07.248 09:44:30 -- accel/accel.sh@20 -- # val=dif_generate 00:16:07.248 09:44:30 -- accel/accel.sh@21 -- # case "$var" in 00:16:07.248 09:44:30 -- accel/accel.sh@23 -- # 
accel_opc=dif_generate 00:16:07.248 09:44:30 -- accel/accel.sh@19 -- # IFS=: 00:16:07.248 09:44:30 -- accel/accel.sh@19 -- # read -r var val 00:16:07.248 09:44:30 -- accel/accel.sh@20 -- # val='4096 bytes' 00:16:07.248 09:44:30 -- accel/accel.sh@21 -- # case "$var" in 00:16:07.248 09:44:30 -- accel/accel.sh@19 -- # IFS=: 00:16:07.248 09:44:30 -- accel/accel.sh@19 -- # read -r var val 00:16:07.248 09:44:30 -- accel/accel.sh@20 -- # val='4096 bytes' 00:16:07.248 09:44:30 -- accel/accel.sh@21 -- # case "$var" in 00:16:07.248 09:44:30 -- accel/accel.sh@19 -- # IFS=: 00:16:07.248 09:44:30 -- accel/accel.sh@19 -- # read -r var val 00:16:07.248 09:44:30 -- accel/accel.sh@20 -- # val='512 bytes' 00:16:07.248 09:44:30 -- accel/accel.sh@21 -- # case "$var" in 00:16:07.248 09:44:30 -- accel/accel.sh@19 -- # IFS=: 00:16:07.248 09:44:30 -- accel/accel.sh@19 -- # read -r var val 00:16:07.248 09:44:30 -- accel/accel.sh@20 -- # val='8 bytes' 00:16:07.248 09:44:30 -- accel/accel.sh@21 -- # case "$var" in 00:16:07.248 09:44:30 -- accel/accel.sh@19 -- # IFS=: 00:16:07.248 09:44:30 -- accel/accel.sh@19 -- # read -r var val 00:16:07.248 09:44:30 -- accel/accel.sh@20 -- # val= 00:16:07.248 09:44:30 -- accel/accel.sh@21 -- # case "$var" in 00:16:07.248 09:44:30 -- accel/accel.sh@19 -- # IFS=: 00:16:07.248 09:44:30 -- accel/accel.sh@19 -- # read -r var val 00:16:07.248 09:44:30 -- accel/accel.sh@20 -- # val=software 00:16:07.248 09:44:30 -- accel/accel.sh@21 -- # case "$var" in 00:16:07.248 09:44:30 -- accel/accel.sh@22 -- # accel_module=software 00:16:07.248 09:44:30 -- accel/accel.sh@19 -- # IFS=: 00:16:07.248 09:44:30 -- accel/accel.sh@19 -- # read -r var val 00:16:07.248 09:44:30 -- accel/accel.sh@20 -- # val=32 00:16:07.248 09:44:30 -- accel/accel.sh@21 -- # case "$var" in 00:16:07.248 09:44:30 -- accel/accel.sh@19 -- # IFS=: 00:16:07.248 09:44:30 -- accel/accel.sh@19 -- # read -r var val 00:16:07.248 09:44:30 -- accel/accel.sh@20 -- # val=32 00:16:07.248 09:44:30 -- accel/accel.sh@21 -- # case "$var" in 00:16:07.248 09:44:30 -- accel/accel.sh@19 -- # IFS=: 00:16:07.248 09:44:30 -- accel/accel.sh@19 -- # read -r var val 00:16:07.248 09:44:30 -- accel/accel.sh@20 -- # val=1 00:16:07.248 09:44:30 -- accel/accel.sh@21 -- # case "$var" in 00:16:07.248 09:44:30 -- accel/accel.sh@19 -- # IFS=: 00:16:07.248 09:44:30 -- accel/accel.sh@19 -- # read -r var val 00:16:07.248 09:44:30 -- accel/accel.sh@20 -- # val='1 seconds' 00:16:07.248 09:44:30 -- accel/accel.sh@21 -- # case "$var" in 00:16:07.248 09:44:30 -- accel/accel.sh@19 -- # IFS=: 00:16:07.248 09:44:30 -- accel/accel.sh@19 -- # read -r var val 00:16:07.248 09:44:30 -- accel/accel.sh@20 -- # val=No 00:16:07.248 09:44:30 -- accel/accel.sh@21 -- # case "$var" in 00:16:07.248 09:44:30 -- accel/accel.sh@19 -- # IFS=: 00:16:07.248 09:44:30 -- accel/accel.sh@19 -- # read -r var val 00:16:07.248 09:44:30 -- accel/accel.sh@20 -- # val= 00:16:07.248 09:44:30 -- accel/accel.sh@21 -- # case "$var" in 00:16:07.248 09:44:30 -- accel/accel.sh@19 -- # IFS=: 00:16:07.248 09:44:30 -- accel/accel.sh@19 -- # read -r var val 00:16:07.248 09:44:30 -- accel/accel.sh@20 -- # val= 00:16:07.248 09:44:30 -- accel/accel.sh@21 -- # case "$var" in 00:16:07.248 09:44:30 -- accel/accel.sh@19 -- # IFS=: 00:16:07.248 09:44:30 -- accel/accel.sh@19 -- # read -r var val 00:16:09.158 09:44:32 -- accel/accel.sh@20 -- # val= 00:16:09.158 09:44:32 -- accel/accel.sh@21 -- # case "$var" in 00:16:09.158 09:44:32 -- accel/accel.sh@19 -- # IFS=: 00:16:09.158 09:44:32 -- accel/accel.sh@19 -- # read -r var 
val 00:16:09.158 09:44:32 -- accel/accel.sh@20 -- # val= 00:16:09.158 09:44:32 -- accel/accel.sh@21 -- # case "$var" in 00:16:09.158 09:44:32 -- accel/accel.sh@19 -- # IFS=: 00:16:09.158 09:44:32 -- accel/accel.sh@19 -- # read -r var val 00:16:09.158 09:44:32 -- accel/accel.sh@20 -- # val= 00:16:09.158 09:44:32 -- accel/accel.sh@21 -- # case "$var" in 00:16:09.158 09:44:32 -- accel/accel.sh@19 -- # IFS=: 00:16:09.158 09:44:32 -- accel/accel.sh@19 -- # read -r var val 00:16:09.158 09:44:32 -- accel/accel.sh@20 -- # val= 00:16:09.158 09:44:32 -- accel/accel.sh@21 -- # case "$var" in 00:16:09.158 09:44:32 -- accel/accel.sh@19 -- # IFS=: 00:16:09.158 09:44:32 -- accel/accel.sh@19 -- # read -r var val 00:16:09.158 09:44:32 -- accel/accel.sh@20 -- # val= 00:16:09.158 09:44:32 -- accel/accel.sh@21 -- # case "$var" in 00:16:09.158 09:44:32 -- accel/accel.sh@19 -- # IFS=: 00:16:09.158 09:44:32 -- accel/accel.sh@19 -- # read -r var val 00:16:09.158 09:44:32 -- accel/accel.sh@20 -- # val= 00:16:09.158 09:44:32 -- accel/accel.sh@21 -- # case "$var" in 00:16:09.158 09:44:32 -- accel/accel.sh@19 -- # IFS=: 00:16:09.158 09:44:32 -- accel/accel.sh@19 -- # read -r var val 00:16:09.158 09:44:32 -- accel/accel.sh@27 -- # [[ -n software ]] 00:16:09.158 09:44:32 -- accel/accel.sh@27 -- # [[ -n dif_generate ]] 00:16:09.158 09:44:32 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:16:09.158 00:16:09.158 real 0m2.518s 00:16:09.158 user 0m2.225s 00:16:09.158 sys 0m0.195s 00:16:09.158 09:44:32 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:16:09.158 ************************************ 00:16:09.158 END TEST accel_dif_generate 00:16:09.158 ************************************ 00:16:09.158 09:44:32 -- common/autotest_common.sh@10 -- # set +x 00:16:09.158 09:44:32 -- accel/accel.sh@113 -- # run_test accel_dif_generate_copy accel_test -t 1 -w dif_generate_copy 00:16:09.158 09:44:32 -- common/autotest_common.sh@1087 -- # '[' 6 -le 1 ']' 00:16:09.158 09:44:32 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:16:09.158 09:44:32 -- common/autotest_common.sh@10 -- # set +x 00:16:09.158 ************************************ 00:16:09.158 START TEST accel_dif_generate_copy 00:16:09.158 ************************************ 00:16:09.158 09:44:32 -- common/autotest_common.sh@1111 -- # accel_test -t 1 -w dif_generate_copy 00:16:09.158 09:44:32 -- accel/accel.sh@16 -- # local accel_opc 00:16:09.158 09:44:32 -- accel/accel.sh@17 -- # local accel_module 00:16:09.158 09:44:32 -- accel/accel.sh@19 -- # IFS=: 00:16:09.158 09:44:32 -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_generate_copy 00:16:09.158 09:44:32 -- accel/accel.sh@19 -- # read -r var val 00:16:09.158 09:44:32 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate_copy 00:16:09.158 09:44:32 -- accel/accel.sh@12 -- # build_accel_config 00:16:09.158 09:44:32 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:16:09.158 09:44:32 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:16:09.158 09:44:32 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:16:09.158 09:44:32 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:16:09.159 09:44:32 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:16:09.159 09:44:32 -- accel/accel.sh@40 -- # local IFS=, 00:16:09.159 09:44:32 -- accel/accel.sh@41 -- # jq -r . 00:16:09.159 [2024-04-18 09:44:32.540885] Starting SPDK v24.05-pre git sha1 65b4e17c6 / DPDK 23.11.0 initialization... 
00:16:09.159 [2024-04-18 09:44:32.541069] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62698 ] 00:16:09.417 [2024-04-18 09:44:32.716389] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:09.676 [2024-04-18 09:44:33.009872] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:16:09.933 09:44:33 -- accel/accel.sh@20 -- # val= 00:16:09.933 09:44:33 -- accel/accel.sh@21 -- # case "$var" in 00:16:09.933 09:44:33 -- accel/accel.sh@19 -- # IFS=: 00:16:09.933 09:44:33 -- accel/accel.sh@19 -- # read -r var val 00:16:09.933 09:44:33 -- accel/accel.sh@20 -- # val= 00:16:09.933 09:44:33 -- accel/accel.sh@21 -- # case "$var" in 00:16:09.933 09:44:33 -- accel/accel.sh@19 -- # IFS=: 00:16:09.933 09:44:33 -- accel/accel.sh@19 -- # read -r var val 00:16:09.933 09:44:33 -- accel/accel.sh@20 -- # val=0x1 00:16:09.933 09:44:33 -- accel/accel.sh@21 -- # case "$var" in 00:16:09.933 09:44:33 -- accel/accel.sh@19 -- # IFS=: 00:16:09.933 09:44:33 -- accel/accel.sh@19 -- # read -r var val 00:16:09.933 09:44:33 -- accel/accel.sh@20 -- # val= 00:16:09.933 09:44:33 -- accel/accel.sh@21 -- # case "$var" in 00:16:09.933 09:44:33 -- accel/accel.sh@19 -- # IFS=: 00:16:09.933 09:44:33 -- accel/accel.sh@19 -- # read -r var val 00:16:09.933 09:44:33 -- accel/accel.sh@20 -- # val= 00:16:09.933 09:44:33 -- accel/accel.sh@21 -- # case "$var" in 00:16:09.933 09:44:33 -- accel/accel.sh@19 -- # IFS=: 00:16:09.933 09:44:33 -- accel/accel.sh@19 -- # read -r var val 00:16:09.933 09:44:33 -- accel/accel.sh@20 -- # val=dif_generate_copy 00:16:09.933 09:44:33 -- accel/accel.sh@21 -- # case "$var" in 00:16:09.933 09:44:33 -- accel/accel.sh@23 -- # accel_opc=dif_generate_copy 00:16:09.933 09:44:33 -- accel/accel.sh@19 -- # IFS=: 00:16:09.933 09:44:33 -- accel/accel.sh@19 -- # read -r var val 00:16:09.933 09:44:33 -- accel/accel.sh@20 -- # val='4096 bytes' 00:16:09.933 09:44:33 -- accel/accel.sh@21 -- # case "$var" in 00:16:09.933 09:44:33 -- accel/accel.sh@19 -- # IFS=: 00:16:09.933 09:44:33 -- accel/accel.sh@19 -- # read -r var val 00:16:09.933 09:44:33 -- accel/accel.sh@20 -- # val='4096 bytes' 00:16:09.933 09:44:33 -- accel/accel.sh@21 -- # case "$var" in 00:16:09.933 09:44:33 -- accel/accel.sh@19 -- # IFS=: 00:16:09.933 09:44:33 -- accel/accel.sh@19 -- # read -r var val 00:16:09.933 09:44:33 -- accel/accel.sh@20 -- # val= 00:16:09.933 09:44:33 -- accel/accel.sh@21 -- # case "$var" in 00:16:09.933 09:44:33 -- accel/accel.sh@19 -- # IFS=: 00:16:09.933 09:44:33 -- accel/accel.sh@19 -- # read -r var val 00:16:09.933 09:44:33 -- accel/accel.sh@20 -- # val=software 00:16:09.933 09:44:33 -- accel/accel.sh@21 -- # case "$var" in 00:16:09.933 09:44:33 -- accel/accel.sh@22 -- # accel_module=software 00:16:09.933 09:44:33 -- accel/accel.sh@19 -- # IFS=: 00:16:09.933 09:44:33 -- accel/accel.sh@19 -- # read -r var val 00:16:09.933 09:44:33 -- accel/accel.sh@20 -- # val=32 00:16:09.933 09:44:33 -- accel/accel.sh@21 -- # case "$var" in 00:16:09.933 09:44:33 -- accel/accel.sh@19 -- # IFS=: 00:16:09.933 09:44:33 -- accel/accel.sh@19 -- # read -r var val 00:16:09.933 09:44:33 -- accel/accel.sh@20 -- # val=32 00:16:09.933 09:44:33 -- accel/accel.sh@21 -- # case "$var" in 00:16:09.933 09:44:33 -- accel/accel.sh@19 -- # IFS=: 00:16:09.933 09:44:33 -- accel/accel.sh@19 -- # read -r var val 00:16:09.933 09:44:33 -- accel/accel.sh@20 
-- # val=1 00:16:09.933 09:44:33 -- accel/accel.sh@21 -- # case "$var" in 00:16:09.933 09:44:33 -- accel/accel.sh@19 -- # IFS=: 00:16:09.933 09:44:33 -- accel/accel.sh@19 -- # read -r var val 00:16:09.933 09:44:33 -- accel/accel.sh@20 -- # val='1 seconds' 00:16:09.933 09:44:33 -- accel/accel.sh@21 -- # case "$var" in 00:16:09.933 09:44:33 -- accel/accel.sh@19 -- # IFS=: 00:16:09.933 09:44:33 -- accel/accel.sh@19 -- # read -r var val 00:16:09.933 09:44:33 -- accel/accel.sh@20 -- # val=No 00:16:09.933 09:44:33 -- accel/accel.sh@21 -- # case "$var" in 00:16:09.933 09:44:33 -- accel/accel.sh@19 -- # IFS=: 00:16:09.933 09:44:33 -- accel/accel.sh@19 -- # read -r var val 00:16:09.933 09:44:33 -- accel/accel.sh@20 -- # val= 00:16:09.933 09:44:33 -- accel/accel.sh@21 -- # case "$var" in 00:16:09.933 09:44:33 -- accel/accel.sh@19 -- # IFS=: 00:16:09.933 09:44:33 -- accel/accel.sh@19 -- # read -r var val 00:16:09.933 09:44:33 -- accel/accel.sh@20 -- # val= 00:16:09.933 09:44:33 -- accel/accel.sh@21 -- # case "$var" in 00:16:09.933 09:44:33 -- accel/accel.sh@19 -- # IFS=: 00:16:09.933 09:44:33 -- accel/accel.sh@19 -- # read -r var val 00:16:11.834 09:44:35 -- accel/accel.sh@20 -- # val= 00:16:11.834 09:44:35 -- accel/accel.sh@21 -- # case "$var" in 00:16:11.834 09:44:35 -- accel/accel.sh@19 -- # IFS=: 00:16:11.834 09:44:35 -- accel/accel.sh@19 -- # read -r var val 00:16:11.834 09:44:35 -- accel/accel.sh@20 -- # val= 00:16:11.834 09:44:35 -- accel/accel.sh@21 -- # case "$var" in 00:16:11.834 09:44:35 -- accel/accel.sh@19 -- # IFS=: 00:16:11.834 09:44:35 -- accel/accel.sh@19 -- # read -r var val 00:16:11.834 09:44:35 -- accel/accel.sh@20 -- # val= 00:16:11.834 09:44:35 -- accel/accel.sh@21 -- # case "$var" in 00:16:11.834 09:44:35 -- accel/accel.sh@19 -- # IFS=: 00:16:11.834 09:44:35 -- accel/accel.sh@19 -- # read -r var val 00:16:11.834 09:44:35 -- accel/accel.sh@20 -- # val= 00:16:11.834 09:44:35 -- accel/accel.sh@21 -- # case "$var" in 00:16:11.834 09:44:35 -- accel/accel.sh@19 -- # IFS=: 00:16:11.834 09:44:35 -- accel/accel.sh@19 -- # read -r var val 00:16:11.834 09:44:35 -- accel/accel.sh@20 -- # val= 00:16:11.834 09:44:35 -- accel/accel.sh@21 -- # case "$var" in 00:16:11.834 09:44:35 -- accel/accel.sh@19 -- # IFS=: 00:16:11.834 09:44:35 -- accel/accel.sh@19 -- # read -r var val 00:16:11.834 09:44:35 -- accel/accel.sh@20 -- # val= 00:16:11.834 09:44:35 -- accel/accel.sh@21 -- # case "$var" in 00:16:11.834 09:44:35 -- accel/accel.sh@19 -- # IFS=: 00:16:11.834 09:44:35 -- accel/accel.sh@19 -- # read -r var val 00:16:11.834 09:44:35 -- accel/accel.sh@27 -- # [[ -n software ]] 00:16:11.834 09:44:35 -- accel/accel.sh@27 -- # [[ -n dif_generate_copy ]] 00:16:11.834 09:44:35 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:16:11.834 00:16:11.834 real 0m2.647s 00:16:11.834 user 0m2.350s 00:16:11.834 sys 0m0.198s 00:16:11.834 09:44:35 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:16:11.834 ************************************ 00:16:11.834 END TEST accel_dif_generate_copy 00:16:11.834 ************************************ 00:16:11.834 09:44:35 -- common/autotest_common.sh@10 -- # set +x 00:16:11.834 09:44:35 -- accel/accel.sh@115 -- # [[ y == y ]] 00:16:11.834 09:44:35 -- accel/accel.sh@116 -- # run_test accel_comp accel_test -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib 00:16:11.834 09:44:35 -- common/autotest_common.sh@1087 -- # '[' 8 -le 1 ']' 00:16:11.834 09:44:35 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:16:11.834 09:44:35 -- 
common/autotest_common.sh@10 -- # set +x 00:16:12.093 ************************************ 00:16:12.093 START TEST accel_comp 00:16:12.093 ************************************ 00:16:12.093 09:44:35 -- common/autotest_common.sh@1111 -- # accel_test -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib 00:16:12.093 09:44:35 -- accel/accel.sh@16 -- # local accel_opc 00:16:12.093 09:44:35 -- accel/accel.sh@17 -- # local accel_module 00:16:12.093 09:44:35 -- accel/accel.sh@19 -- # IFS=: 00:16:12.093 09:44:35 -- accel/accel.sh@15 -- # accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib 00:16:12.093 09:44:35 -- accel/accel.sh@19 -- # read -r var val 00:16:12.093 09:44:35 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib 00:16:12.093 09:44:35 -- accel/accel.sh@12 -- # build_accel_config 00:16:12.093 09:44:35 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:16:12.093 09:44:35 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:16:12.093 09:44:35 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:16:12.093 09:44:35 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:16:12.093 09:44:35 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:16:12.093 09:44:35 -- accel/accel.sh@40 -- # local IFS=, 00:16:12.093 09:44:35 -- accel/accel.sh@41 -- # jq -r . 00:16:12.093 [2024-04-18 09:44:35.303841] Starting SPDK v24.05-pre git sha1 65b4e17c6 / DPDK 23.11.0 initialization... 00:16:12.093 [2024-04-18 09:44:35.303968] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62754 ] 00:16:12.093 [2024-04-18 09:44:35.474256] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:12.351 [2024-04-18 09:44:35.698745] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:16:12.610 09:44:35 -- accel/accel.sh@20 -- # val= 00:16:12.610 09:44:35 -- accel/accel.sh@21 -- # case "$var" in 00:16:12.610 09:44:35 -- accel/accel.sh@19 -- # IFS=: 00:16:12.610 09:44:35 -- accel/accel.sh@19 -- # read -r var val 00:16:12.610 09:44:35 -- accel/accel.sh@20 -- # val= 00:16:12.610 09:44:35 -- accel/accel.sh@21 -- # case "$var" in 00:16:12.610 09:44:35 -- accel/accel.sh@19 -- # IFS=: 00:16:12.610 09:44:35 -- accel/accel.sh@19 -- # read -r var val 00:16:12.610 09:44:35 -- accel/accel.sh@20 -- # val= 00:16:12.610 09:44:35 -- accel/accel.sh@21 -- # case "$var" in 00:16:12.610 09:44:35 -- accel/accel.sh@19 -- # IFS=: 00:16:12.610 09:44:35 -- accel/accel.sh@19 -- # read -r var val 00:16:12.610 09:44:35 -- accel/accel.sh@20 -- # val=0x1 00:16:12.610 09:44:35 -- accel/accel.sh@21 -- # case "$var" in 00:16:12.610 09:44:35 -- accel/accel.sh@19 -- # IFS=: 00:16:12.610 09:44:35 -- accel/accel.sh@19 -- # read -r var val 00:16:12.610 09:44:35 -- accel/accel.sh@20 -- # val= 00:16:12.610 09:44:35 -- accel/accel.sh@21 -- # case "$var" in 00:16:12.610 09:44:35 -- accel/accel.sh@19 -- # IFS=: 00:16:12.610 09:44:35 -- accel/accel.sh@19 -- # read -r var val 00:16:12.610 09:44:35 -- accel/accel.sh@20 -- # val= 00:16:12.610 09:44:35 -- accel/accel.sh@21 -- # case "$var" in 00:16:12.610 09:44:35 -- accel/accel.sh@19 -- # IFS=: 00:16:12.610 09:44:35 -- accel/accel.sh@19 -- # read -r var val 00:16:12.610 09:44:35 -- accel/accel.sh@20 -- # val=compress 00:16:12.610 09:44:35 -- accel/accel.sh@21 -- # case "$var" in 00:16:12.610 09:44:35 -- accel/accel.sh@23 
-- # accel_opc=compress 00:16:12.610 09:44:35 -- accel/accel.sh@19 -- # IFS=: 00:16:12.610 09:44:35 -- accel/accel.sh@19 -- # read -r var val 00:16:12.610 09:44:35 -- accel/accel.sh@20 -- # val='4096 bytes' 00:16:12.610 09:44:35 -- accel/accel.sh@21 -- # case "$var" in 00:16:12.610 09:44:35 -- accel/accel.sh@19 -- # IFS=: 00:16:12.610 09:44:35 -- accel/accel.sh@19 -- # read -r var val 00:16:12.610 09:44:35 -- accel/accel.sh@20 -- # val= 00:16:12.610 09:44:35 -- accel/accel.sh@21 -- # case "$var" in 00:16:12.610 09:44:35 -- accel/accel.sh@19 -- # IFS=: 00:16:12.610 09:44:35 -- accel/accel.sh@19 -- # read -r var val 00:16:12.610 09:44:35 -- accel/accel.sh@20 -- # val=software 00:16:12.610 09:44:35 -- accel/accel.sh@21 -- # case "$var" in 00:16:12.610 09:44:35 -- accel/accel.sh@22 -- # accel_module=software 00:16:12.610 09:44:35 -- accel/accel.sh@19 -- # IFS=: 00:16:12.610 09:44:35 -- accel/accel.sh@19 -- # read -r var val 00:16:12.610 09:44:35 -- accel/accel.sh@20 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:16:12.610 09:44:35 -- accel/accel.sh@21 -- # case "$var" in 00:16:12.610 09:44:35 -- accel/accel.sh@19 -- # IFS=: 00:16:12.610 09:44:35 -- accel/accel.sh@19 -- # read -r var val 00:16:12.610 09:44:35 -- accel/accel.sh@20 -- # val=32 00:16:12.610 09:44:35 -- accel/accel.sh@21 -- # case "$var" in 00:16:12.610 09:44:35 -- accel/accel.sh@19 -- # IFS=: 00:16:12.610 09:44:35 -- accel/accel.sh@19 -- # read -r var val 00:16:12.610 09:44:35 -- accel/accel.sh@20 -- # val=32 00:16:12.610 09:44:35 -- accel/accel.sh@21 -- # case "$var" in 00:16:12.610 09:44:35 -- accel/accel.sh@19 -- # IFS=: 00:16:12.610 09:44:35 -- accel/accel.sh@19 -- # read -r var val 00:16:12.610 09:44:35 -- accel/accel.sh@20 -- # val=1 00:16:12.610 09:44:35 -- accel/accel.sh@21 -- # case "$var" in 00:16:12.610 09:44:35 -- accel/accel.sh@19 -- # IFS=: 00:16:12.610 09:44:35 -- accel/accel.sh@19 -- # read -r var val 00:16:12.610 09:44:35 -- accel/accel.sh@20 -- # val='1 seconds' 00:16:12.610 09:44:35 -- accel/accel.sh@21 -- # case "$var" in 00:16:12.610 09:44:35 -- accel/accel.sh@19 -- # IFS=: 00:16:12.610 09:44:35 -- accel/accel.sh@19 -- # read -r var val 00:16:12.610 09:44:35 -- accel/accel.sh@20 -- # val=No 00:16:12.610 09:44:35 -- accel/accel.sh@21 -- # case "$var" in 00:16:12.610 09:44:35 -- accel/accel.sh@19 -- # IFS=: 00:16:12.610 09:44:35 -- accel/accel.sh@19 -- # read -r var val 00:16:12.610 09:44:35 -- accel/accel.sh@20 -- # val= 00:16:12.610 09:44:35 -- accel/accel.sh@21 -- # case "$var" in 00:16:12.610 09:44:35 -- accel/accel.sh@19 -- # IFS=: 00:16:12.610 09:44:35 -- accel/accel.sh@19 -- # read -r var val 00:16:12.610 09:44:35 -- accel/accel.sh@20 -- # val= 00:16:12.610 09:44:35 -- accel/accel.sh@21 -- # case "$var" in 00:16:12.610 09:44:35 -- accel/accel.sh@19 -- # IFS=: 00:16:12.610 09:44:35 -- accel/accel.sh@19 -- # read -r var val 00:16:14.514 09:44:37 -- accel/accel.sh@20 -- # val= 00:16:14.514 09:44:37 -- accel/accel.sh@21 -- # case "$var" in 00:16:14.514 09:44:37 -- accel/accel.sh@19 -- # IFS=: 00:16:14.514 09:44:37 -- accel/accel.sh@19 -- # read -r var val 00:16:14.514 09:44:37 -- accel/accel.sh@20 -- # val= 00:16:14.514 09:44:37 -- accel/accel.sh@21 -- # case "$var" in 00:16:14.514 09:44:37 -- accel/accel.sh@19 -- # IFS=: 00:16:14.514 09:44:37 -- accel/accel.sh@19 -- # read -r var val 00:16:14.514 09:44:37 -- accel/accel.sh@20 -- # val= 00:16:14.514 09:44:37 -- accel/accel.sh@21 -- # case "$var" in 00:16:14.514 09:44:37 -- accel/accel.sh@19 -- # IFS=: 00:16:14.514 09:44:37 -- accel/accel.sh@19 -- # 
read -r var val 00:16:14.514 09:44:37 -- accel/accel.sh@20 -- # val= 00:16:14.514 09:44:37 -- accel/accel.sh@21 -- # case "$var" in 00:16:14.514 09:44:37 -- accel/accel.sh@19 -- # IFS=: 00:16:14.514 09:44:37 -- accel/accel.sh@19 -- # read -r var val 00:16:14.514 09:44:37 -- accel/accel.sh@20 -- # val= 00:16:14.514 09:44:37 -- accel/accel.sh@21 -- # case "$var" in 00:16:14.514 09:44:37 -- accel/accel.sh@19 -- # IFS=: 00:16:14.514 09:44:37 -- accel/accel.sh@19 -- # read -r var val 00:16:14.514 09:44:37 -- accel/accel.sh@20 -- # val= 00:16:14.514 09:44:37 -- accel/accel.sh@21 -- # case "$var" in 00:16:14.514 09:44:37 -- accel/accel.sh@19 -- # IFS=: 00:16:14.514 09:44:37 -- accel/accel.sh@19 -- # read -r var val 00:16:14.514 09:44:37 -- accel/accel.sh@27 -- # [[ -n software ]] 00:16:14.514 09:44:37 -- accel/accel.sh@27 -- # [[ -n compress ]] 00:16:14.514 09:44:37 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:16:14.514 00:16:14.514 real 0m2.534s 00:16:14.514 user 0m2.249s 00:16:14.514 sys 0m0.188s 00:16:14.514 09:44:37 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:16:14.514 ************************************ 00:16:14.514 09:44:37 -- common/autotest_common.sh@10 -- # set +x 00:16:14.514 END TEST accel_comp 00:16:14.514 ************************************ 00:16:14.514 09:44:37 -- accel/accel.sh@117 -- # run_test accel_decomp accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:16:14.514 09:44:37 -- common/autotest_common.sh@1087 -- # '[' 9 -le 1 ']' 00:16:14.514 09:44:37 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:16:14.514 09:44:37 -- common/autotest_common.sh@10 -- # set +x 00:16:14.772 ************************************ 00:16:14.772 START TEST accel_decomp 00:16:14.772 ************************************ 00:16:14.772 09:44:37 -- common/autotest_common.sh@1111 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:16:14.772 09:44:37 -- accel/accel.sh@16 -- # local accel_opc 00:16:14.772 09:44:37 -- accel/accel.sh@17 -- # local accel_module 00:16:14.772 09:44:37 -- accel/accel.sh@19 -- # IFS=: 00:16:14.772 09:44:37 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:16:14.772 09:44:37 -- accel/accel.sh@19 -- # read -r var val 00:16:14.772 09:44:37 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:16:14.772 09:44:37 -- accel/accel.sh@12 -- # build_accel_config 00:16:14.772 09:44:37 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:16:14.772 09:44:37 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:16:14.772 09:44:37 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:16:14.772 09:44:37 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:16:14.772 09:44:37 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:16:14.773 09:44:37 -- accel/accel.sh@40 -- # local IFS=, 00:16:14.773 09:44:37 -- accel/accel.sh@41 -- # jq -r . 00:16:14.773 [2024-04-18 09:44:37.970887] Starting SPDK v24.05-pre git sha1 65b4e17c6 / DPDK 23.11.0 initialization... 
00:16:14.773 [2024-04-18 09:44:37.971041] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62803 ] 00:16:14.773 [2024-04-18 09:44:38.145110] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:15.376 [2024-04-18 09:44:38.433661] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:16:15.376 09:44:38 -- accel/accel.sh@20 -- # val= 00:16:15.376 09:44:38 -- accel/accel.sh@21 -- # case "$var" in 00:16:15.376 09:44:38 -- accel/accel.sh@19 -- # IFS=: 00:16:15.376 09:44:38 -- accel/accel.sh@19 -- # read -r var val 00:16:15.376 09:44:38 -- accel/accel.sh@20 -- # val= 00:16:15.376 09:44:38 -- accel/accel.sh@21 -- # case "$var" in 00:16:15.376 09:44:38 -- accel/accel.sh@19 -- # IFS=: 00:16:15.376 09:44:38 -- accel/accel.sh@19 -- # read -r var val 00:16:15.376 09:44:38 -- accel/accel.sh@20 -- # val= 00:16:15.376 09:44:38 -- accel/accel.sh@21 -- # case "$var" in 00:16:15.376 09:44:38 -- accel/accel.sh@19 -- # IFS=: 00:16:15.376 09:44:38 -- accel/accel.sh@19 -- # read -r var val 00:16:15.376 09:44:38 -- accel/accel.sh@20 -- # val=0x1 00:16:15.376 09:44:38 -- accel/accel.sh@21 -- # case "$var" in 00:16:15.376 09:44:38 -- accel/accel.sh@19 -- # IFS=: 00:16:15.376 09:44:38 -- accel/accel.sh@19 -- # read -r var val 00:16:15.376 09:44:38 -- accel/accel.sh@20 -- # val= 00:16:15.376 09:44:38 -- accel/accel.sh@21 -- # case "$var" in 00:16:15.376 09:44:38 -- accel/accel.sh@19 -- # IFS=: 00:16:15.376 09:44:38 -- accel/accel.sh@19 -- # read -r var val 00:16:15.376 09:44:38 -- accel/accel.sh@20 -- # val= 00:16:15.376 09:44:38 -- accel/accel.sh@21 -- # case "$var" in 00:16:15.376 09:44:38 -- accel/accel.sh@19 -- # IFS=: 00:16:15.376 09:44:38 -- accel/accel.sh@19 -- # read -r var val 00:16:15.376 09:44:38 -- accel/accel.sh@20 -- # val=decompress 00:16:15.376 09:44:38 -- accel/accel.sh@21 -- # case "$var" in 00:16:15.376 09:44:38 -- accel/accel.sh@23 -- # accel_opc=decompress 00:16:15.376 09:44:38 -- accel/accel.sh@19 -- # IFS=: 00:16:15.376 09:44:38 -- accel/accel.sh@19 -- # read -r var val 00:16:15.376 09:44:38 -- accel/accel.sh@20 -- # val='4096 bytes' 00:16:15.376 09:44:38 -- accel/accel.sh@21 -- # case "$var" in 00:16:15.376 09:44:38 -- accel/accel.sh@19 -- # IFS=: 00:16:15.376 09:44:38 -- accel/accel.sh@19 -- # read -r var val 00:16:15.376 09:44:38 -- accel/accel.sh@20 -- # val= 00:16:15.376 09:44:38 -- accel/accel.sh@21 -- # case "$var" in 00:16:15.376 09:44:38 -- accel/accel.sh@19 -- # IFS=: 00:16:15.376 09:44:38 -- accel/accel.sh@19 -- # read -r var val 00:16:15.376 09:44:38 -- accel/accel.sh@20 -- # val=software 00:16:15.376 09:44:38 -- accel/accel.sh@21 -- # case "$var" in 00:16:15.376 09:44:38 -- accel/accel.sh@22 -- # accel_module=software 00:16:15.376 09:44:38 -- accel/accel.sh@19 -- # IFS=: 00:16:15.376 09:44:38 -- accel/accel.sh@19 -- # read -r var val 00:16:15.376 09:44:38 -- accel/accel.sh@20 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:16:15.376 09:44:38 -- accel/accel.sh@21 -- # case "$var" in 00:16:15.376 09:44:38 -- accel/accel.sh@19 -- # IFS=: 00:16:15.376 09:44:38 -- accel/accel.sh@19 -- # read -r var val 00:16:15.376 09:44:38 -- accel/accel.sh@20 -- # val=32 00:16:15.376 09:44:38 -- accel/accel.sh@21 -- # case "$var" in 00:16:15.376 09:44:38 -- accel/accel.sh@19 -- # IFS=: 00:16:15.376 09:44:38 -- accel/accel.sh@19 -- # read -r var val 00:16:15.376 09:44:38 -- 
accel/accel.sh@20 -- # val=32 00:16:15.376 09:44:38 -- accel/accel.sh@21 -- # case "$var" in 00:16:15.376 09:44:38 -- accel/accel.sh@19 -- # IFS=: 00:16:15.376 09:44:38 -- accel/accel.sh@19 -- # read -r var val 00:16:15.376 09:44:38 -- accel/accel.sh@20 -- # val=1 00:16:15.376 09:44:38 -- accel/accel.sh@21 -- # case "$var" in 00:16:15.376 09:44:38 -- accel/accel.sh@19 -- # IFS=: 00:16:15.376 09:44:38 -- accel/accel.sh@19 -- # read -r var val 00:16:15.376 09:44:38 -- accel/accel.sh@20 -- # val='1 seconds' 00:16:15.376 09:44:38 -- accel/accel.sh@21 -- # case "$var" in 00:16:15.376 09:44:38 -- accel/accel.sh@19 -- # IFS=: 00:16:15.376 09:44:38 -- accel/accel.sh@19 -- # read -r var val 00:16:15.376 09:44:38 -- accel/accel.sh@20 -- # val=Yes 00:16:15.376 09:44:38 -- accel/accel.sh@21 -- # case "$var" in 00:16:15.376 09:44:38 -- accel/accel.sh@19 -- # IFS=: 00:16:15.376 09:44:38 -- accel/accel.sh@19 -- # read -r var val 00:16:15.376 09:44:38 -- accel/accel.sh@20 -- # val= 00:16:15.376 09:44:38 -- accel/accel.sh@21 -- # case "$var" in 00:16:15.376 09:44:38 -- accel/accel.sh@19 -- # IFS=: 00:16:15.376 09:44:38 -- accel/accel.sh@19 -- # read -r var val 00:16:15.376 09:44:38 -- accel/accel.sh@20 -- # val= 00:16:15.376 09:44:38 -- accel/accel.sh@21 -- # case "$var" in 00:16:15.376 09:44:38 -- accel/accel.sh@19 -- # IFS=: 00:16:15.377 09:44:38 -- accel/accel.sh@19 -- # read -r var val 00:16:17.277 09:44:40 -- accel/accel.sh@20 -- # val= 00:16:17.277 09:44:40 -- accel/accel.sh@21 -- # case "$var" in 00:16:17.277 09:44:40 -- accel/accel.sh@19 -- # IFS=: 00:16:17.277 09:44:40 -- accel/accel.sh@19 -- # read -r var val 00:16:17.277 09:44:40 -- accel/accel.sh@20 -- # val= 00:16:17.277 09:44:40 -- accel/accel.sh@21 -- # case "$var" in 00:16:17.277 09:44:40 -- accel/accel.sh@19 -- # IFS=: 00:16:17.277 09:44:40 -- accel/accel.sh@19 -- # read -r var val 00:16:17.277 09:44:40 -- accel/accel.sh@20 -- # val= 00:16:17.277 09:44:40 -- accel/accel.sh@21 -- # case "$var" in 00:16:17.277 09:44:40 -- accel/accel.sh@19 -- # IFS=: 00:16:17.277 09:44:40 -- accel/accel.sh@19 -- # read -r var val 00:16:17.277 09:44:40 -- accel/accel.sh@20 -- # val= 00:16:17.277 09:44:40 -- accel/accel.sh@21 -- # case "$var" in 00:16:17.277 09:44:40 -- accel/accel.sh@19 -- # IFS=: 00:16:17.277 09:44:40 -- accel/accel.sh@19 -- # read -r var val 00:16:17.277 09:44:40 -- accel/accel.sh@20 -- # val= 00:16:17.277 09:44:40 -- accel/accel.sh@21 -- # case "$var" in 00:16:17.277 09:44:40 -- accel/accel.sh@19 -- # IFS=: 00:16:17.277 09:44:40 -- accel/accel.sh@19 -- # read -r var val 00:16:17.277 09:44:40 -- accel/accel.sh@20 -- # val= 00:16:17.277 09:44:40 -- accel/accel.sh@21 -- # case "$var" in 00:16:17.277 09:44:40 -- accel/accel.sh@19 -- # IFS=: 00:16:17.277 09:44:40 -- accel/accel.sh@19 -- # read -r var val 00:16:17.277 09:44:40 -- accel/accel.sh@27 -- # [[ -n software ]] 00:16:17.277 09:44:40 -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:16:17.277 09:44:40 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:16:17.277 00:16:17.277 real 0m2.617s 00:16:17.277 user 0m2.319s 00:16:17.277 sys 0m0.201s 00:16:17.277 ************************************ 00:16:17.277 END TEST accel_decomp 00:16:17.277 ************************************ 00:16:17.277 09:44:40 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:16:17.277 09:44:40 -- common/autotest_common.sh@10 -- # set +x 00:16:17.277 09:44:40 -- accel/accel.sh@118 -- # run_test accel_decmop_full accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 
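The START TEST / END TEST banners and the real/user/sys timings that bracket each case come from the run_test helper in common/autotest_common.sh. The sketch below is a simplified guess at what that wrapper does, reconstructed from this output rather than taken from the upstream script:

    # simplified run_test sketch: banner, time the wrapped command, banner, propagate status
    run_test() {
        local name=$1; shift
        echo "************ START TEST $name ************"
        time "$@"
        local rc=$?
        echo "************ END TEST $name ************"
        return $rc
    }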
00:16:17.277 09:44:40 -- common/autotest_common.sh@1087 -- # '[' 11 -le 1 ']' 00:16:17.277 09:44:40 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:16:17.277 09:44:40 -- common/autotest_common.sh@10 -- # set +x 00:16:17.277 ************************************ 00:16:17.277 START TEST accel_decmop_full 00:16:17.277 ************************************ 00:16:17.277 09:44:40 -- common/autotest_common.sh@1111 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 00:16:17.277 09:44:40 -- accel/accel.sh@16 -- # local accel_opc 00:16:17.277 09:44:40 -- accel/accel.sh@17 -- # local accel_module 00:16:17.277 09:44:40 -- accel/accel.sh@19 -- # IFS=: 00:16:17.277 09:44:40 -- accel/accel.sh@19 -- # read -r var val 00:16:17.277 09:44:40 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 00:16:17.277 09:44:40 -- accel/accel.sh@12 -- # build_accel_config 00:16:17.278 09:44:40 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 00:16:17.278 09:44:40 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:16:17.278 09:44:40 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:16:17.278 09:44:40 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:16:17.278 09:44:40 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:16:17.278 09:44:40 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:16:17.278 09:44:40 -- accel/accel.sh@40 -- # local IFS=, 00:16:17.278 09:44:40 -- accel/accel.sh@41 -- # jq -r . 00:16:17.555 [2024-04-18 09:44:40.708193] Starting SPDK v24.05-pre git sha1 65b4e17c6 / DPDK 23.11.0 initialization... 00:16:17.555 [2024-04-18 09:44:40.708386] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62855 ] 00:16:17.555 [2024-04-18 09:44:40.883988] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:17.813 [2024-04-18 09:44:41.150938] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:16:18.071 09:44:41 -- accel/accel.sh@20 -- # val= 00:16:18.071 09:44:41 -- accel/accel.sh@21 -- # case "$var" in 00:16:18.071 09:44:41 -- accel/accel.sh@19 -- # IFS=: 00:16:18.071 09:44:41 -- accel/accel.sh@19 -- # read -r var val 00:16:18.071 09:44:41 -- accel/accel.sh@20 -- # val= 00:16:18.071 09:44:41 -- accel/accel.sh@21 -- # case "$var" in 00:16:18.071 09:44:41 -- accel/accel.sh@19 -- # IFS=: 00:16:18.071 09:44:41 -- accel/accel.sh@19 -- # read -r var val 00:16:18.071 09:44:41 -- accel/accel.sh@20 -- # val= 00:16:18.071 09:44:41 -- accel/accel.sh@21 -- # case "$var" in 00:16:18.071 09:44:41 -- accel/accel.sh@19 -- # IFS=: 00:16:18.071 09:44:41 -- accel/accel.sh@19 -- # read -r var val 00:16:18.071 09:44:41 -- accel/accel.sh@20 -- # val=0x1 00:16:18.071 09:44:41 -- accel/accel.sh@21 -- # case "$var" in 00:16:18.071 09:44:41 -- accel/accel.sh@19 -- # IFS=: 00:16:18.071 09:44:41 -- accel/accel.sh@19 -- # read -r var val 00:16:18.071 09:44:41 -- accel/accel.sh@20 -- # val= 00:16:18.071 09:44:41 -- accel/accel.sh@21 -- # case "$var" in 00:16:18.071 09:44:41 -- accel/accel.sh@19 -- # IFS=: 00:16:18.071 09:44:41 -- accel/accel.sh@19 -- # read -r var val 00:16:18.071 09:44:41 -- accel/accel.sh@20 -- # val= 00:16:18.071 09:44:41 -- accel/accel.sh@21 -- # case "$var" in 00:16:18.071 09:44:41 -- accel/accel.sh@19 -- # IFS=: 00:16:18.071 
09:44:41 -- accel/accel.sh@19 -- # read -r var val 00:16:18.071 09:44:41 -- accel/accel.sh@20 -- # val=decompress 00:16:18.071 09:44:41 -- accel/accel.sh@21 -- # case "$var" in 00:16:18.071 09:44:41 -- accel/accel.sh@23 -- # accel_opc=decompress 00:16:18.071 09:44:41 -- accel/accel.sh@19 -- # IFS=: 00:16:18.071 09:44:41 -- accel/accel.sh@19 -- # read -r var val 00:16:18.071 09:44:41 -- accel/accel.sh@20 -- # val='111250 bytes' 00:16:18.071 09:44:41 -- accel/accel.sh@21 -- # case "$var" in 00:16:18.071 09:44:41 -- accel/accel.sh@19 -- # IFS=: 00:16:18.071 09:44:41 -- accel/accel.sh@19 -- # read -r var val 00:16:18.071 09:44:41 -- accel/accel.sh@20 -- # val= 00:16:18.071 09:44:41 -- accel/accel.sh@21 -- # case "$var" in 00:16:18.071 09:44:41 -- accel/accel.sh@19 -- # IFS=: 00:16:18.071 09:44:41 -- accel/accel.sh@19 -- # read -r var val 00:16:18.071 09:44:41 -- accel/accel.sh@20 -- # val=software 00:16:18.071 09:44:41 -- accel/accel.sh@21 -- # case "$var" in 00:16:18.071 09:44:41 -- accel/accel.sh@22 -- # accel_module=software 00:16:18.071 09:44:41 -- accel/accel.sh@19 -- # IFS=: 00:16:18.071 09:44:41 -- accel/accel.sh@19 -- # read -r var val 00:16:18.072 09:44:41 -- accel/accel.sh@20 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:16:18.072 09:44:41 -- accel/accel.sh@21 -- # case "$var" in 00:16:18.072 09:44:41 -- accel/accel.sh@19 -- # IFS=: 00:16:18.072 09:44:41 -- accel/accel.sh@19 -- # read -r var val 00:16:18.072 09:44:41 -- accel/accel.sh@20 -- # val=32 00:16:18.072 09:44:41 -- accel/accel.sh@21 -- # case "$var" in 00:16:18.072 09:44:41 -- accel/accel.sh@19 -- # IFS=: 00:16:18.072 09:44:41 -- accel/accel.sh@19 -- # read -r var val 00:16:18.072 09:44:41 -- accel/accel.sh@20 -- # val=32 00:16:18.072 09:44:41 -- accel/accel.sh@21 -- # case "$var" in 00:16:18.072 09:44:41 -- accel/accel.sh@19 -- # IFS=: 00:16:18.072 09:44:41 -- accel/accel.sh@19 -- # read -r var val 00:16:18.072 09:44:41 -- accel/accel.sh@20 -- # val=1 00:16:18.072 09:44:41 -- accel/accel.sh@21 -- # case "$var" in 00:16:18.072 09:44:41 -- accel/accel.sh@19 -- # IFS=: 00:16:18.072 09:44:41 -- accel/accel.sh@19 -- # read -r var val 00:16:18.072 09:44:41 -- accel/accel.sh@20 -- # val='1 seconds' 00:16:18.072 09:44:41 -- accel/accel.sh@21 -- # case "$var" in 00:16:18.072 09:44:41 -- accel/accel.sh@19 -- # IFS=: 00:16:18.072 09:44:41 -- accel/accel.sh@19 -- # read -r var val 00:16:18.072 09:44:41 -- accel/accel.sh@20 -- # val=Yes 00:16:18.072 09:44:41 -- accel/accel.sh@21 -- # case "$var" in 00:16:18.072 09:44:41 -- accel/accel.sh@19 -- # IFS=: 00:16:18.072 09:44:41 -- accel/accel.sh@19 -- # read -r var val 00:16:18.072 09:44:41 -- accel/accel.sh@20 -- # val= 00:16:18.072 09:44:41 -- accel/accel.sh@21 -- # case "$var" in 00:16:18.072 09:44:41 -- accel/accel.sh@19 -- # IFS=: 00:16:18.072 09:44:41 -- accel/accel.sh@19 -- # read -r var val 00:16:18.072 09:44:41 -- accel/accel.sh@20 -- # val= 00:16:18.072 09:44:41 -- accel/accel.sh@21 -- # case "$var" in 00:16:18.072 09:44:41 -- accel/accel.sh@19 -- # IFS=: 00:16:18.072 09:44:41 -- accel/accel.sh@19 -- # read -r var val 00:16:19.975 09:44:43 -- accel/accel.sh@20 -- # val= 00:16:19.975 09:44:43 -- accel/accel.sh@21 -- # case "$var" in 00:16:19.975 09:44:43 -- accel/accel.sh@19 -- # IFS=: 00:16:19.975 09:44:43 -- accel/accel.sh@19 -- # read -r var val 00:16:19.975 09:44:43 -- accel/accel.sh@20 -- # val= 00:16:19.975 09:44:43 -- accel/accel.sh@21 -- # case "$var" in 00:16:19.975 09:44:43 -- accel/accel.sh@19 -- # IFS=: 00:16:19.975 09:44:43 -- accel/accel.sh@19 -- # read -r 
var val 00:16:19.975 09:44:43 -- accel/accel.sh@20 -- # val= 00:16:19.975 09:44:43 -- accel/accel.sh@21 -- # case "$var" in 00:16:19.975 09:44:43 -- accel/accel.sh@19 -- # IFS=: 00:16:19.975 09:44:43 -- accel/accel.sh@19 -- # read -r var val 00:16:19.975 09:44:43 -- accel/accel.sh@20 -- # val= 00:16:19.975 09:44:43 -- accel/accel.sh@21 -- # case "$var" in 00:16:19.975 09:44:43 -- accel/accel.sh@19 -- # IFS=: 00:16:19.975 09:44:43 -- accel/accel.sh@19 -- # read -r var val 00:16:19.975 09:44:43 -- accel/accel.sh@20 -- # val= 00:16:19.975 09:44:43 -- accel/accel.sh@21 -- # case "$var" in 00:16:19.975 09:44:43 -- accel/accel.sh@19 -- # IFS=: 00:16:19.975 09:44:43 -- accel/accel.sh@19 -- # read -r var val 00:16:19.975 09:44:43 -- accel/accel.sh@20 -- # val= 00:16:19.975 09:44:43 -- accel/accel.sh@21 -- # case "$var" in 00:16:19.975 09:44:43 -- accel/accel.sh@19 -- # IFS=: 00:16:19.975 09:44:43 -- accel/accel.sh@19 -- # read -r var val 00:16:19.975 09:44:43 -- accel/accel.sh@27 -- # [[ -n software ]] 00:16:19.975 09:44:43 -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:16:19.975 09:44:43 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:16:19.975 00:16:19.975 real 0m2.585s 00:16:19.975 user 0m2.276s 00:16:19.975 sys 0m0.213s 00:16:19.975 09:44:43 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:16:19.975 09:44:43 -- common/autotest_common.sh@10 -- # set +x 00:16:19.975 ************************************ 00:16:19.975 END TEST accel_decmop_full 00:16:19.975 ************************************ 00:16:19.975 09:44:43 -- accel/accel.sh@119 -- # run_test accel_decomp_mcore accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -m 0xf 00:16:19.975 09:44:43 -- common/autotest_common.sh@1087 -- # '[' 11 -le 1 ']' 00:16:19.975 09:44:43 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:16:19.975 09:44:43 -- common/autotest_common.sh@10 -- # set +x 00:16:19.975 ************************************ 00:16:19.975 START TEST accel_decomp_mcore 00:16:19.975 ************************************ 00:16:19.975 09:44:43 -- common/autotest_common.sh@1111 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -m 0xf 00:16:19.975 09:44:43 -- accel/accel.sh@16 -- # local accel_opc 00:16:19.975 09:44:43 -- accel/accel.sh@17 -- # local accel_module 00:16:19.975 09:44:43 -- accel/accel.sh@19 -- # IFS=: 00:16:19.975 09:44:43 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -m 0xf 00:16:19.975 09:44:43 -- accel/accel.sh@19 -- # read -r var val 00:16:19.975 09:44:43 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -m 0xf 00:16:19.975 09:44:43 -- accel/accel.sh@12 -- # build_accel_config 00:16:19.975 09:44:43 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:16:19.975 09:44:43 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:16:19.975 09:44:43 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:16:19.975 09:44:43 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:16:19.975 09:44:43 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:16:19.975 09:44:43 -- accel/accel.sh@40 -- # local IFS=, 00:16:19.975 09:44:43 -- accel/accel.sh@41 -- # jq -r . 00:16:20.234 [2024-04-18 09:44:43.395919] Starting SPDK v24.05-pre git sha1 65b4e17c6 / DPDK 23.11.0 initialization... 
00:16:20.234 [2024-04-18 09:44:43.396066] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62911 ] 00:16:20.234 [2024-04-18 09:44:43.558572] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:16:20.493 [2024-04-18 09:44:43.801277] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:16:20.493 [2024-04-18 09:44:43.801444] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:16:20.493 [2024-04-18 09:44:43.802335] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:16:20.493 [2024-04-18 09:44:43.802349] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:16:20.753 09:44:44 -- accel/accel.sh@20 -- # val= 00:16:20.753 09:44:44 -- accel/accel.sh@21 -- # case "$var" in 00:16:20.753 09:44:44 -- accel/accel.sh@19 -- # IFS=: 00:16:20.753 09:44:44 -- accel/accel.sh@19 -- # read -r var val 00:16:20.753 09:44:44 -- accel/accel.sh@20 -- # val= 00:16:20.753 09:44:44 -- accel/accel.sh@21 -- # case "$var" in 00:16:20.753 09:44:44 -- accel/accel.sh@19 -- # IFS=: 00:16:20.753 09:44:44 -- accel/accel.sh@19 -- # read -r var val 00:16:20.753 09:44:44 -- accel/accel.sh@20 -- # val= 00:16:20.753 09:44:44 -- accel/accel.sh@21 -- # case "$var" in 00:16:20.753 09:44:44 -- accel/accel.sh@19 -- # IFS=: 00:16:20.753 09:44:44 -- accel/accel.sh@19 -- # read -r var val 00:16:20.753 09:44:44 -- accel/accel.sh@20 -- # val=0xf 00:16:20.753 09:44:44 -- accel/accel.sh@21 -- # case "$var" in 00:16:20.753 09:44:44 -- accel/accel.sh@19 -- # IFS=: 00:16:20.753 09:44:44 -- accel/accel.sh@19 -- # read -r var val 00:16:20.753 09:44:44 -- accel/accel.sh@20 -- # val= 00:16:20.753 09:44:44 -- accel/accel.sh@21 -- # case "$var" in 00:16:20.753 09:44:44 -- accel/accel.sh@19 -- # IFS=: 00:16:20.753 09:44:44 -- accel/accel.sh@19 -- # read -r var val 00:16:20.753 09:44:44 -- accel/accel.sh@20 -- # val= 00:16:20.753 09:44:44 -- accel/accel.sh@21 -- # case "$var" in 00:16:20.753 09:44:44 -- accel/accel.sh@19 -- # IFS=: 00:16:20.753 09:44:44 -- accel/accel.sh@19 -- # read -r var val 00:16:20.753 09:44:44 -- accel/accel.sh@20 -- # val=decompress 00:16:20.753 09:44:44 -- accel/accel.sh@21 -- # case "$var" in 00:16:20.753 09:44:44 -- accel/accel.sh@23 -- # accel_opc=decompress 00:16:20.753 09:44:44 -- accel/accel.sh@19 -- # IFS=: 00:16:20.753 09:44:44 -- accel/accel.sh@19 -- # read -r var val 00:16:20.753 09:44:44 -- accel/accel.sh@20 -- # val='4096 bytes' 00:16:20.753 09:44:44 -- accel/accel.sh@21 -- # case "$var" in 00:16:20.753 09:44:44 -- accel/accel.sh@19 -- # IFS=: 00:16:20.753 09:44:44 -- accel/accel.sh@19 -- # read -r var val 00:16:20.753 09:44:44 -- accel/accel.sh@20 -- # val= 00:16:20.753 09:44:44 -- accel/accel.sh@21 -- # case "$var" in 00:16:20.753 09:44:44 -- accel/accel.sh@19 -- # IFS=: 00:16:20.753 09:44:44 -- accel/accel.sh@19 -- # read -r var val 00:16:20.753 09:44:44 -- accel/accel.sh@20 -- # val=software 00:16:20.753 09:44:44 -- accel/accel.sh@21 -- # case "$var" in 00:16:20.753 09:44:44 -- accel/accel.sh@22 -- # accel_module=software 00:16:20.753 09:44:44 -- accel/accel.sh@19 -- # IFS=: 00:16:20.753 09:44:44 -- accel/accel.sh@19 -- # read -r var val 00:16:20.753 09:44:44 -- accel/accel.sh@20 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:16:20.753 09:44:44 -- accel/accel.sh@21 -- # case "$var" in 00:16:20.753 09:44:44 -- accel/accel.sh@19 -- # IFS=: 
00:16:20.753 09:44:44 -- accel/accel.sh@19 -- # read -r var val 00:16:20.753 09:44:44 -- accel/accel.sh@20 -- # val=32 00:16:20.753 09:44:44 -- accel/accel.sh@21 -- # case "$var" in 00:16:20.753 09:44:44 -- accel/accel.sh@19 -- # IFS=: 00:16:20.753 09:44:44 -- accel/accel.sh@19 -- # read -r var val 00:16:20.753 09:44:44 -- accel/accel.sh@20 -- # val=32 00:16:20.753 09:44:44 -- accel/accel.sh@21 -- # case "$var" in 00:16:20.753 09:44:44 -- accel/accel.sh@19 -- # IFS=: 00:16:20.753 09:44:44 -- accel/accel.sh@19 -- # read -r var val 00:16:20.753 09:44:44 -- accel/accel.sh@20 -- # val=1 00:16:20.753 09:44:44 -- accel/accel.sh@21 -- # case "$var" in 00:16:20.753 09:44:44 -- accel/accel.sh@19 -- # IFS=: 00:16:20.753 09:44:44 -- accel/accel.sh@19 -- # read -r var val 00:16:20.753 09:44:44 -- accel/accel.sh@20 -- # val='1 seconds' 00:16:20.753 09:44:44 -- accel/accel.sh@21 -- # case "$var" in 00:16:20.753 09:44:44 -- accel/accel.sh@19 -- # IFS=: 00:16:20.753 09:44:44 -- accel/accel.sh@19 -- # read -r var val 00:16:20.753 09:44:44 -- accel/accel.sh@20 -- # val=Yes 00:16:20.753 09:44:44 -- accel/accel.sh@21 -- # case "$var" in 00:16:20.753 09:44:44 -- accel/accel.sh@19 -- # IFS=: 00:16:20.753 09:44:44 -- accel/accel.sh@19 -- # read -r var val 00:16:20.753 09:44:44 -- accel/accel.sh@20 -- # val= 00:16:20.753 09:44:44 -- accel/accel.sh@21 -- # case "$var" in 00:16:20.754 09:44:44 -- accel/accel.sh@19 -- # IFS=: 00:16:20.754 09:44:44 -- accel/accel.sh@19 -- # read -r var val 00:16:20.754 09:44:44 -- accel/accel.sh@20 -- # val= 00:16:20.754 09:44:44 -- accel/accel.sh@21 -- # case "$var" in 00:16:20.754 09:44:44 -- accel/accel.sh@19 -- # IFS=: 00:16:20.754 09:44:44 -- accel/accel.sh@19 -- # read -r var val 00:16:22.658 09:44:45 -- accel/accel.sh@20 -- # val= 00:16:22.658 09:44:45 -- accel/accel.sh@21 -- # case "$var" in 00:16:22.658 09:44:45 -- accel/accel.sh@19 -- # IFS=: 00:16:22.658 09:44:45 -- accel/accel.sh@19 -- # read -r var val 00:16:22.658 09:44:45 -- accel/accel.sh@20 -- # val= 00:16:22.658 09:44:45 -- accel/accel.sh@21 -- # case "$var" in 00:16:22.658 09:44:45 -- accel/accel.sh@19 -- # IFS=: 00:16:22.658 09:44:45 -- accel/accel.sh@19 -- # read -r var val 00:16:22.658 09:44:45 -- accel/accel.sh@20 -- # val= 00:16:22.658 09:44:45 -- accel/accel.sh@21 -- # case "$var" in 00:16:22.658 09:44:45 -- accel/accel.sh@19 -- # IFS=: 00:16:22.658 09:44:45 -- accel/accel.sh@19 -- # read -r var val 00:16:22.658 09:44:45 -- accel/accel.sh@20 -- # val= 00:16:22.658 09:44:45 -- accel/accel.sh@21 -- # case "$var" in 00:16:22.658 09:44:45 -- accel/accel.sh@19 -- # IFS=: 00:16:22.658 09:44:45 -- accel/accel.sh@19 -- # read -r var val 00:16:22.658 09:44:45 -- accel/accel.sh@20 -- # val= 00:16:22.658 09:44:45 -- accel/accel.sh@21 -- # case "$var" in 00:16:22.658 09:44:45 -- accel/accel.sh@19 -- # IFS=: 00:16:22.658 09:44:45 -- accel/accel.sh@19 -- # read -r var val 00:16:22.658 09:44:45 -- accel/accel.sh@20 -- # val= 00:16:22.658 09:44:45 -- accel/accel.sh@21 -- # case "$var" in 00:16:22.658 09:44:45 -- accel/accel.sh@19 -- # IFS=: 00:16:22.658 09:44:45 -- accel/accel.sh@19 -- # read -r var val 00:16:22.658 09:44:45 -- accel/accel.sh@20 -- # val= 00:16:22.658 09:44:45 -- accel/accel.sh@21 -- # case "$var" in 00:16:22.658 09:44:45 -- accel/accel.sh@19 -- # IFS=: 00:16:22.658 09:44:45 -- accel/accel.sh@19 -- # read -r var val 00:16:22.658 09:44:45 -- accel/accel.sh@20 -- # val= 00:16:22.658 09:44:45 -- accel/accel.sh@21 -- # case "$var" in 00:16:22.658 09:44:45 -- accel/accel.sh@19 -- # IFS=: 00:16:22.658 09:44:45 -- 
accel/accel.sh@19 -- # read -r var val 00:16:22.658 09:44:45 -- accel/accel.sh@20 -- # val= 00:16:22.658 09:44:45 -- accel/accel.sh@21 -- # case "$var" in 00:16:22.658 09:44:45 -- accel/accel.sh@19 -- # IFS=: 00:16:22.658 09:44:45 -- accel/accel.sh@19 -- # read -r var val 00:16:22.658 09:44:45 -- accel/accel.sh@27 -- # [[ -n software ]] 00:16:22.658 09:44:45 -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:16:22.658 09:44:45 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:16:22.658 00:16:22.658 real 0m2.567s 00:16:22.658 user 0m0.021s 00:16:22.658 sys 0m0.002s 00:16:22.658 ************************************ 00:16:22.658 END TEST accel_decomp_mcore 00:16:22.658 09:44:45 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:16:22.658 09:44:45 -- common/autotest_common.sh@10 -- # set +x 00:16:22.658 ************************************ 00:16:22.658 09:44:45 -- accel/accel.sh@120 -- # run_test accel_decomp_full_mcore accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -m 0xf 00:16:22.658 09:44:45 -- common/autotest_common.sh@1087 -- # '[' 13 -le 1 ']' 00:16:22.658 09:44:45 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:16:22.658 09:44:45 -- common/autotest_common.sh@10 -- # set +x 00:16:22.658 ************************************ 00:16:22.658 START TEST accel_decomp_full_mcore 00:16:22.658 ************************************ 00:16:22.658 09:44:46 -- common/autotest_common.sh@1111 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -m 0xf 00:16:22.658 09:44:46 -- accel/accel.sh@16 -- # local accel_opc 00:16:22.658 09:44:46 -- accel/accel.sh@17 -- # local accel_module 00:16:22.658 09:44:46 -- accel/accel.sh@19 -- # IFS=: 00:16:22.658 09:44:46 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -m 0xf 00:16:22.658 09:44:46 -- accel/accel.sh@19 -- # read -r var val 00:16:22.658 09:44:46 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -m 0xf 00:16:22.658 09:44:46 -- accel/accel.sh@12 -- # build_accel_config 00:16:22.658 09:44:46 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:16:22.658 09:44:46 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:16:22.659 09:44:46 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:16:22.659 09:44:46 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:16:22.659 09:44:46 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:16:22.659 09:44:46 -- accel/accel.sh@40 -- # local IFS=, 00:16:22.659 09:44:46 -- accel/accel.sh@41 -- # jq -r . 00:16:22.917 [2024-04-18 09:44:46.090416] Starting SPDK v24.05-pre git sha1 65b4e17c6 / DPDK 23.11.0 initialization... 
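The -m 0xf argument used by the mcore cases is a core mask: each set bit selects one core, which is why the EAL banner reports 'Total cores available: 4' and four reactors start. A quick way to see the mapping from a shell (generic illustration, not part of the test):

    # 0xf -> binary 1111 -> reactors on cores 0-3; a mask of 0x5 would select cores 0 and 2
    echo 'obase=2; ibase=16; F' | bc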
00:16:22.917 [2024-04-18 09:44:46.090657] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62959 ] 00:16:22.917 [2024-04-18 09:44:46.275199] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:16:23.176 [2024-04-18 09:44:46.562423] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:16:23.176 [2024-04-18 09:44:46.562614] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:16:23.176 [2024-04-18 09:44:46.563238] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:16:23.176 [2024-04-18 09:44:46.563255] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:16:23.436 09:44:46 -- accel/accel.sh@20 -- # val= 00:16:23.436 09:44:46 -- accel/accel.sh@21 -- # case "$var" in 00:16:23.436 09:44:46 -- accel/accel.sh@19 -- # IFS=: 00:16:23.436 09:44:46 -- accel/accel.sh@19 -- # read -r var val 00:16:23.436 09:44:46 -- accel/accel.sh@20 -- # val= 00:16:23.436 09:44:46 -- accel/accel.sh@21 -- # case "$var" in 00:16:23.436 09:44:46 -- accel/accel.sh@19 -- # IFS=: 00:16:23.436 09:44:46 -- accel/accel.sh@19 -- # read -r var val 00:16:23.436 09:44:46 -- accel/accel.sh@20 -- # val= 00:16:23.436 09:44:46 -- accel/accel.sh@21 -- # case "$var" in 00:16:23.436 09:44:46 -- accel/accel.sh@19 -- # IFS=: 00:16:23.436 09:44:46 -- accel/accel.sh@19 -- # read -r var val 00:16:23.436 09:44:46 -- accel/accel.sh@20 -- # val=0xf 00:16:23.436 09:44:46 -- accel/accel.sh@21 -- # case "$var" in 00:16:23.436 09:44:46 -- accel/accel.sh@19 -- # IFS=: 00:16:23.436 09:44:46 -- accel/accel.sh@19 -- # read -r var val 00:16:23.436 09:44:46 -- accel/accel.sh@20 -- # val= 00:16:23.436 09:44:46 -- accel/accel.sh@21 -- # case "$var" in 00:16:23.436 09:44:46 -- accel/accel.sh@19 -- # IFS=: 00:16:23.436 09:44:46 -- accel/accel.sh@19 -- # read -r var val 00:16:23.436 09:44:46 -- accel/accel.sh@20 -- # val= 00:16:23.436 09:44:46 -- accel/accel.sh@21 -- # case "$var" in 00:16:23.436 09:44:46 -- accel/accel.sh@19 -- # IFS=: 00:16:23.436 09:44:46 -- accel/accel.sh@19 -- # read -r var val 00:16:23.436 09:44:46 -- accel/accel.sh@20 -- # val=decompress 00:16:23.436 09:44:46 -- accel/accel.sh@21 -- # case "$var" in 00:16:23.436 09:44:46 -- accel/accel.sh@23 -- # accel_opc=decompress 00:16:23.436 09:44:46 -- accel/accel.sh@19 -- # IFS=: 00:16:23.436 09:44:46 -- accel/accel.sh@19 -- # read -r var val 00:16:23.436 09:44:46 -- accel/accel.sh@20 -- # val='111250 bytes' 00:16:23.436 09:44:46 -- accel/accel.sh@21 -- # case "$var" in 00:16:23.436 09:44:46 -- accel/accel.sh@19 -- # IFS=: 00:16:23.436 09:44:46 -- accel/accel.sh@19 -- # read -r var val 00:16:23.436 09:44:46 -- accel/accel.sh@20 -- # val= 00:16:23.436 09:44:46 -- accel/accel.sh@21 -- # case "$var" in 00:16:23.436 09:44:46 -- accel/accel.sh@19 -- # IFS=: 00:16:23.436 09:44:46 -- accel/accel.sh@19 -- # read -r var val 00:16:23.436 09:44:46 -- accel/accel.sh@20 -- # val=software 00:16:23.436 09:44:46 -- accel/accel.sh@21 -- # case "$var" in 00:16:23.436 09:44:46 -- accel/accel.sh@22 -- # accel_module=software 00:16:23.436 09:44:46 -- accel/accel.sh@19 -- # IFS=: 00:16:23.436 09:44:46 -- accel/accel.sh@19 -- # read -r var val 00:16:23.436 09:44:46 -- accel/accel.sh@20 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:16:23.436 09:44:46 -- accel/accel.sh@21 -- # case "$var" in 00:16:23.436 09:44:46 -- accel/accel.sh@19 -- # IFS=: 
00:16:23.436 09:44:46 -- accel/accel.sh@19 -- # read -r var val 00:16:23.436 09:44:46 -- accel/accel.sh@20 -- # val=32 00:16:23.436 09:44:46 -- accel/accel.sh@21 -- # case "$var" in 00:16:23.436 09:44:46 -- accel/accel.sh@19 -- # IFS=: 00:16:23.436 09:44:46 -- accel/accel.sh@19 -- # read -r var val 00:16:23.436 09:44:46 -- accel/accel.sh@20 -- # val=32 00:16:23.436 09:44:46 -- accel/accel.sh@21 -- # case "$var" in 00:16:23.436 09:44:46 -- accel/accel.sh@19 -- # IFS=: 00:16:23.436 09:44:46 -- accel/accel.sh@19 -- # read -r var val 00:16:23.436 09:44:46 -- accel/accel.sh@20 -- # val=1 00:16:23.436 09:44:46 -- accel/accel.sh@21 -- # case "$var" in 00:16:23.436 09:44:46 -- accel/accel.sh@19 -- # IFS=: 00:16:23.436 09:44:46 -- accel/accel.sh@19 -- # read -r var val 00:16:23.436 09:44:46 -- accel/accel.sh@20 -- # val='1 seconds' 00:16:23.436 09:44:46 -- accel/accel.sh@21 -- # case "$var" in 00:16:23.436 09:44:46 -- accel/accel.sh@19 -- # IFS=: 00:16:23.436 09:44:46 -- accel/accel.sh@19 -- # read -r var val 00:16:23.436 09:44:46 -- accel/accel.sh@20 -- # val=Yes 00:16:23.436 09:44:46 -- accel/accel.sh@21 -- # case "$var" in 00:16:23.436 09:44:46 -- accel/accel.sh@19 -- # IFS=: 00:16:23.436 09:44:46 -- accel/accel.sh@19 -- # read -r var val 00:16:23.436 09:44:46 -- accel/accel.sh@20 -- # val= 00:16:23.436 09:44:46 -- accel/accel.sh@21 -- # case "$var" in 00:16:23.436 09:44:46 -- accel/accel.sh@19 -- # IFS=: 00:16:23.436 09:44:46 -- accel/accel.sh@19 -- # read -r var val 00:16:23.436 09:44:46 -- accel/accel.sh@20 -- # val= 00:16:23.436 09:44:46 -- accel/accel.sh@21 -- # case "$var" in 00:16:23.436 09:44:46 -- accel/accel.sh@19 -- # IFS=: 00:16:23.436 09:44:46 -- accel/accel.sh@19 -- # read -r var val 00:16:25.341 09:44:48 -- accel/accel.sh@20 -- # val= 00:16:25.341 09:44:48 -- accel/accel.sh@21 -- # case "$var" in 00:16:25.341 09:44:48 -- accel/accel.sh@19 -- # IFS=: 00:16:25.341 09:44:48 -- accel/accel.sh@19 -- # read -r var val 00:16:25.341 09:44:48 -- accel/accel.sh@20 -- # val= 00:16:25.341 09:44:48 -- accel/accel.sh@21 -- # case "$var" in 00:16:25.341 09:44:48 -- accel/accel.sh@19 -- # IFS=: 00:16:25.341 09:44:48 -- accel/accel.sh@19 -- # read -r var val 00:16:25.341 09:44:48 -- accel/accel.sh@20 -- # val= 00:16:25.341 09:44:48 -- accel/accel.sh@21 -- # case "$var" in 00:16:25.341 09:44:48 -- accel/accel.sh@19 -- # IFS=: 00:16:25.341 09:44:48 -- accel/accel.sh@19 -- # read -r var val 00:16:25.341 09:44:48 -- accel/accel.sh@20 -- # val= 00:16:25.341 09:44:48 -- accel/accel.sh@21 -- # case "$var" in 00:16:25.341 09:44:48 -- accel/accel.sh@19 -- # IFS=: 00:16:25.341 09:44:48 -- accel/accel.sh@19 -- # read -r var val 00:16:25.341 09:44:48 -- accel/accel.sh@20 -- # val= 00:16:25.341 09:44:48 -- accel/accel.sh@21 -- # case "$var" in 00:16:25.341 09:44:48 -- accel/accel.sh@19 -- # IFS=: 00:16:25.341 09:44:48 -- accel/accel.sh@19 -- # read -r var val 00:16:25.341 09:44:48 -- accel/accel.sh@20 -- # val= 00:16:25.341 09:44:48 -- accel/accel.sh@21 -- # case "$var" in 00:16:25.341 09:44:48 -- accel/accel.sh@19 -- # IFS=: 00:16:25.341 09:44:48 -- accel/accel.sh@19 -- # read -r var val 00:16:25.341 09:44:48 -- accel/accel.sh@20 -- # val= 00:16:25.341 09:44:48 -- accel/accel.sh@21 -- # case "$var" in 00:16:25.341 09:44:48 -- accel/accel.sh@19 -- # IFS=: 00:16:25.341 09:44:48 -- accel/accel.sh@19 -- # read -r var val 00:16:25.341 09:44:48 -- accel/accel.sh@20 -- # val= 00:16:25.341 09:44:48 -- accel/accel.sh@21 -- # case "$var" in 00:16:25.341 09:44:48 -- accel/accel.sh@19 -- # IFS=: 00:16:25.341 09:44:48 -- 
accel/accel.sh@19 -- # read -r var val 00:16:25.341 09:44:48 -- accel/accel.sh@20 -- # val= 00:16:25.341 09:44:48 -- accel/accel.sh@21 -- # case "$var" in 00:16:25.341 09:44:48 -- accel/accel.sh@19 -- # IFS=: 00:16:25.341 09:44:48 -- accel/accel.sh@19 -- # read -r var val 00:16:25.341 09:44:48 -- accel/accel.sh@27 -- # [[ -n software ]] 00:16:25.341 09:44:48 -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:16:25.341 09:44:48 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:16:25.341 00:16:25.341 real 0m2.692s 00:16:25.341 user 0m7.625s 00:16:25.341 sys 0m0.252s 00:16:25.341 ************************************ 00:16:25.341 END TEST accel_decomp_full_mcore 00:16:25.341 ************************************ 00:16:25.341 09:44:48 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:16:25.341 09:44:48 -- common/autotest_common.sh@10 -- # set +x 00:16:25.600 09:44:48 -- accel/accel.sh@121 -- # run_test accel_decomp_mthread accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -T 2 00:16:25.600 09:44:48 -- common/autotest_common.sh@1087 -- # '[' 11 -le 1 ']' 00:16:25.600 09:44:48 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:16:25.600 09:44:48 -- common/autotest_common.sh@10 -- # set +x 00:16:25.600 ************************************ 00:16:25.600 START TEST accel_decomp_mthread 00:16:25.600 ************************************ 00:16:25.600 09:44:48 -- common/autotest_common.sh@1111 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -T 2 00:16:25.600 09:44:48 -- accel/accel.sh@16 -- # local accel_opc 00:16:25.600 09:44:48 -- accel/accel.sh@17 -- # local accel_module 00:16:25.600 09:44:48 -- accel/accel.sh@19 -- # IFS=: 00:16:25.600 09:44:48 -- accel/accel.sh@19 -- # read -r var val 00:16:25.600 09:44:48 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -T 2 00:16:25.600 09:44:48 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -T 2 00:16:25.600 09:44:48 -- accel/accel.sh@12 -- # build_accel_config 00:16:25.600 09:44:48 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:16:25.600 09:44:48 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:16:25.600 09:44:48 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:16:25.600 09:44:48 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:16:25.600 09:44:48 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:16:25.600 09:44:48 -- accel/accel.sh@40 -- # local IFS=, 00:16:25.600 09:44:48 -- accel/accel.sh@41 -- # jq -r . 00:16:25.600 [2024-04-18 09:44:48.903302] Starting SPDK v24.05-pre git sha1 65b4e17c6 / DPDK 23.11.0 initialization... 
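The mthread variant differs from the plain decompress case only by the -T 2 argument visible in the run_test line above (echoed later as val=2); our reading is that it asks accel_perf for two worker threads on the one selected core, though the log itself does not spell the flag out. A manual invocation mirroring it, under the same assumptions as the earlier sketch, would be:

    # same 1-second software decompress of the test file, with -T 2 and output verification
    /home/vagrant/spdk_repo/spdk/build/examples/accel_perf \
        -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -T 2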
00:16:25.600 [2024-04-18 09:44:48.903518] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63018 ] 00:16:25.859 [2024-04-18 09:44:49.076711] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:26.118 [2024-04-18 09:44:49.320548] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:16:26.378 09:44:49 -- accel/accel.sh@20 -- # val= 00:16:26.378 09:44:49 -- accel/accel.sh@21 -- # case "$var" in 00:16:26.378 09:44:49 -- accel/accel.sh@19 -- # IFS=: 00:16:26.378 09:44:49 -- accel/accel.sh@19 -- # read -r var val 00:16:26.378 09:44:49 -- accel/accel.sh@20 -- # val= 00:16:26.378 09:44:49 -- accel/accel.sh@21 -- # case "$var" in 00:16:26.378 09:44:49 -- accel/accel.sh@19 -- # IFS=: 00:16:26.378 09:44:49 -- accel/accel.sh@19 -- # read -r var val 00:16:26.378 09:44:49 -- accel/accel.sh@20 -- # val= 00:16:26.378 09:44:49 -- accel/accel.sh@21 -- # case "$var" in 00:16:26.378 09:44:49 -- accel/accel.sh@19 -- # IFS=: 00:16:26.378 09:44:49 -- accel/accel.sh@19 -- # read -r var val 00:16:26.378 09:44:49 -- accel/accel.sh@20 -- # val=0x1 00:16:26.378 09:44:49 -- accel/accel.sh@21 -- # case "$var" in 00:16:26.378 09:44:49 -- accel/accel.sh@19 -- # IFS=: 00:16:26.378 09:44:49 -- accel/accel.sh@19 -- # read -r var val 00:16:26.378 09:44:49 -- accel/accel.sh@20 -- # val= 00:16:26.378 09:44:49 -- accel/accel.sh@21 -- # case "$var" in 00:16:26.378 09:44:49 -- accel/accel.sh@19 -- # IFS=: 00:16:26.378 09:44:49 -- accel/accel.sh@19 -- # read -r var val 00:16:26.378 09:44:49 -- accel/accel.sh@20 -- # val= 00:16:26.378 09:44:49 -- accel/accel.sh@21 -- # case "$var" in 00:16:26.378 09:44:49 -- accel/accel.sh@19 -- # IFS=: 00:16:26.378 09:44:49 -- accel/accel.sh@19 -- # read -r var val 00:16:26.378 09:44:49 -- accel/accel.sh@20 -- # val=decompress 00:16:26.378 09:44:49 -- accel/accel.sh@21 -- # case "$var" in 00:16:26.378 09:44:49 -- accel/accel.sh@23 -- # accel_opc=decompress 00:16:26.378 09:44:49 -- accel/accel.sh@19 -- # IFS=: 00:16:26.378 09:44:49 -- accel/accel.sh@19 -- # read -r var val 00:16:26.378 09:44:49 -- accel/accel.sh@20 -- # val='4096 bytes' 00:16:26.378 09:44:49 -- accel/accel.sh@21 -- # case "$var" in 00:16:26.378 09:44:49 -- accel/accel.sh@19 -- # IFS=: 00:16:26.378 09:44:49 -- accel/accel.sh@19 -- # read -r var val 00:16:26.378 09:44:49 -- accel/accel.sh@20 -- # val= 00:16:26.378 09:44:49 -- accel/accel.sh@21 -- # case "$var" in 00:16:26.378 09:44:49 -- accel/accel.sh@19 -- # IFS=: 00:16:26.378 09:44:49 -- accel/accel.sh@19 -- # read -r var val 00:16:26.378 09:44:49 -- accel/accel.sh@20 -- # val=software 00:16:26.378 09:44:49 -- accel/accel.sh@21 -- # case "$var" in 00:16:26.378 09:44:49 -- accel/accel.sh@22 -- # accel_module=software 00:16:26.378 09:44:49 -- accel/accel.sh@19 -- # IFS=: 00:16:26.378 09:44:49 -- accel/accel.sh@19 -- # read -r var val 00:16:26.378 09:44:49 -- accel/accel.sh@20 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:16:26.378 09:44:49 -- accel/accel.sh@21 -- # case "$var" in 00:16:26.378 09:44:49 -- accel/accel.sh@19 -- # IFS=: 00:16:26.378 09:44:49 -- accel/accel.sh@19 -- # read -r var val 00:16:26.378 09:44:49 -- accel/accel.sh@20 -- # val=32 00:16:26.378 09:44:49 -- accel/accel.sh@21 -- # case "$var" in 00:16:26.378 09:44:49 -- accel/accel.sh@19 -- # IFS=: 00:16:26.378 09:44:49 -- accel/accel.sh@19 -- # read -r var val 00:16:26.378 09:44:49 -- 
accel/accel.sh@20 -- # val=32 00:16:26.378 09:44:49 -- accel/accel.sh@21 -- # case "$var" in 00:16:26.378 09:44:49 -- accel/accel.sh@19 -- # IFS=: 00:16:26.378 09:44:49 -- accel/accel.sh@19 -- # read -r var val 00:16:26.378 09:44:49 -- accel/accel.sh@20 -- # val=2 00:16:26.378 09:44:49 -- accel/accel.sh@21 -- # case "$var" in 00:16:26.378 09:44:49 -- accel/accel.sh@19 -- # IFS=: 00:16:26.378 09:44:49 -- accel/accel.sh@19 -- # read -r var val 00:16:26.378 09:44:49 -- accel/accel.sh@20 -- # val='1 seconds' 00:16:26.378 09:44:49 -- accel/accel.sh@21 -- # case "$var" in 00:16:26.378 09:44:49 -- accel/accel.sh@19 -- # IFS=: 00:16:26.378 09:44:49 -- accel/accel.sh@19 -- # read -r var val 00:16:26.378 09:44:49 -- accel/accel.sh@20 -- # val=Yes 00:16:26.378 09:44:49 -- accel/accel.sh@21 -- # case "$var" in 00:16:26.378 09:44:49 -- accel/accel.sh@19 -- # IFS=: 00:16:26.378 09:44:49 -- accel/accel.sh@19 -- # read -r var val 00:16:26.378 09:44:49 -- accel/accel.sh@20 -- # val= 00:16:26.378 09:44:49 -- accel/accel.sh@21 -- # case "$var" in 00:16:26.378 09:44:49 -- accel/accel.sh@19 -- # IFS=: 00:16:26.378 09:44:49 -- accel/accel.sh@19 -- # read -r var val 00:16:26.378 09:44:49 -- accel/accel.sh@20 -- # val= 00:16:26.378 09:44:49 -- accel/accel.sh@21 -- # case "$var" in 00:16:26.378 09:44:49 -- accel/accel.sh@19 -- # IFS=: 00:16:26.378 09:44:49 -- accel/accel.sh@19 -- # read -r var val 00:16:28.284 09:44:51 -- accel/accel.sh@20 -- # val= 00:16:28.284 09:44:51 -- accel/accel.sh@21 -- # case "$var" in 00:16:28.284 09:44:51 -- accel/accel.sh@19 -- # IFS=: 00:16:28.284 09:44:51 -- accel/accel.sh@19 -- # read -r var val 00:16:28.284 09:44:51 -- accel/accel.sh@20 -- # val= 00:16:28.284 09:44:51 -- accel/accel.sh@21 -- # case "$var" in 00:16:28.284 09:44:51 -- accel/accel.sh@19 -- # IFS=: 00:16:28.284 09:44:51 -- accel/accel.sh@19 -- # read -r var val 00:16:28.284 09:44:51 -- accel/accel.sh@20 -- # val= 00:16:28.284 09:44:51 -- accel/accel.sh@21 -- # case "$var" in 00:16:28.284 09:44:51 -- accel/accel.sh@19 -- # IFS=: 00:16:28.284 09:44:51 -- accel/accel.sh@19 -- # read -r var val 00:16:28.284 09:44:51 -- accel/accel.sh@20 -- # val= 00:16:28.284 09:44:51 -- accel/accel.sh@21 -- # case "$var" in 00:16:28.284 09:44:51 -- accel/accel.sh@19 -- # IFS=: 00:16:28.284 09:44:51 -- accel/accel.sh@19 -- # read -r var val 00:16:28.284 09:44:51 -- accel/accel.sh@20 -- # val= 00:16:28.284 09:44:51 -- accel/accel.sh@21 -- # case "$var" in 00:16:28.284 09:44:51 -- accel/accel.sh@19 -- # IFS=: 00:16:28.284 09:44:51 -- accel/accel.sh@19 -- # read -r var val 00:16:28.284 09:44:51 -- accel/accel.sh@20 -- # val= 00:16:28.284 09:44:51 -- accel/accel.sh@21 -- # case "$var" in 00:16:28.284 09:44:51 -- accel/accel.sh@19 -- # IFS=: 00:16:28.284 09:44:51 -- accel/accel.sh@19 -- # read -r var val 00:16:28.284 09:44:51 -- accel/accel.sh@20 -- # val= 00:16:28.284 09:44:51 -- accel/accel.sh@21 -- # case "$var" in 00:16:28.284 09:44:51 -- accel/accel.sh@19 -- # IFS=: 00:16:28.284 09:44:51 -- accel/accel.sh@19 -- # read -r var val 00:16:28.284 ************************************ 00:16:28.284 END TEST accel_decomp_mthread 00:16:28.284 ************************************ 00:16:28.284 09:44:51 -- accel/accel.sh@27 -- # [[ -n software ]] 00:16:28.284 09:44:51 -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:16:28.284 09:44:51 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:16:28.284 00:16:28.284 real 0m2.590s 00:16:28.284 user 0m2.293s 00:16:28.284 sys 0m0.200s 00:16:28.284 09:44:51 -- common/autotest_common.sh@1112 -- # 
xtrace_disable 00:16:28.284 09:44:51 -- common/autotest_common.sh@10 -- # set +x 00:16:28.284 09:44:51 -- accel/accel.sh@122 -- # run_test accel_deomp_full_mthread accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -T 2 00:16:28.284 09:44:51 -- common/autotest_common.sh@1087 -- # '[' 13 -le 1 ']' 00:16:28.284 09:44:51 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:16:28.284 09:44:51 -- common/autotest_common.sh@10 -- # set +x 00:16:28.284 ************************************ 00:16:28.284 START TEST accel_deomp_full_mthread 00:16:28.284 ************************************ 00:16:28.284 09:44:51 -- common/autotest_common.sh@1111 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -T 2 00:16:28.284 09:44:51 -- accel/accel.sh@16 -- # local accel_opc 00:16:28.284 09:44:51 -- accel/accel.sh@17 -- # local accel_module 00:16:28.284 09:44:51 -- accel/accel.sh@19 -- # IFS=: 00:16:28.284 09:44:51 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -T 2 00:16:28.284 09:44:51 -- accel/accel.sh@19 -- # read -r var val 00:16:28.284 09:44:51 -- accel/accel.sh@12 -- # build_accel_config 00:16:28.284 09:44:51 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -T 2 00:16:28.284 09:44:51 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:16:28.284 09:44:51 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:16:28.284 09:44:51 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:16:28.284 09:44:51 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:16:28.284 09:44:51 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:16:28.284 09:44:51 -- accel/accel.sh@40 -- # local IFS=, 00:16:28.284 09:44:51 -- accel/accel.sh@41 -- # jq -r . 00:16:28.284 [2024-04-18 09:44:51.607577] Starting SPDK v24.05-pre git sha1 65b4e17c6 / DPDK 23.11.0 initialization... 
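Every accel_perf run here is also handed a JSON accel config on /dev/fd/62, assembled by build_accel_config and filtered through jq -r .; in this job no module overrides are configured, so the config is effectively empty, and the -o 0 cases log '111250 bytes' (the whole bib file) rather than the 4096-byte buffer of the other runs. A rough, hypothetical way to reproduce that plumbing from a shell (the harness presumably uses process substitution; this herestring on fd 62 is just an equivalent sketch) is:

    # feed an empty JSON config on fd 62; -o 0 and -T 2 match the full_mthread case above
    /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 \
        -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -T 2 \
        62<<< '{}'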
00:16:28.284 [2024-04-18 09:44:51.607737] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63069 ] 00:16:28.544 [2024-04-18 09:44:51.781313] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:28.802 [2024-04-18 09:44:52.021201] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:16:29.062 09:44:52 -- accel/accel.sh@20 -- # val= 00:16:29.063 09:44:52 -- accel/accel.sh@21 -- # case "$var" in 00:16:29.063 09:44:52 -- accel/accel.sh@19 -- # IFS=: 00:16:29.063 09:44:52 -- accel/accel.sh@19 -- # read -r var val 00:16:29.063 09:44:52 -- accel/accel.sh@20 -- # val= 00:16:29.063 09:44:52 -- accel/accel.sh@21 -- # case "$var" in 00:16:29.063 09:44:52 -- accel/accel.sh@19 -- # IFS=: 00:16:29.063 09:44:52 -- accel/accel.sh@19 -- # read -r var val 00:16:29.063 09:44:52 -- accel/accel.sh@20 -- # val= 00:16:29.063 09:44:52 -- accel/accel.sh@21 -- # case "$var" in 00:16:29.063 09:44:52 -- accel/accel.sh@19 -- # IFS=: 00:16:29.063 09:44:52 -- accel/accel.sh@19 -- # read -r var val 00:16:29.063 09:44:52 -- accel/accel.sh@20 -- # val=0x1 00:16:29.063 09:44:52 -- accel/accel.sh@21 -- # case "$var" in 00:16:29.063 09:44:52 -- accel/accel.sh@19 -- # IFS=: 00:16:29.063 09:44:52 -- accel/accel.sh@19 -- # read -r var val 00:16:29.063 09:44:52 -- accel/accel.sh@20 -- # val= 00:16:29.063 09:44:52 -- accel/accel.sh@21 -- # case "$var" in 00:16:29.063 09:44:52 -- accel/accel.sh@19 -- # IFS=: 00:16:29.063 09:44:52 -- accel/accel.sh@19 -- # read -r var val 00:16:29.063 09:44:52 -- accel/accel.sh@20 -- # val= 00:16:29.063 09:44:52 -- accel/accel.sh@21 -- # case "$var" in 00:16:29.063 09:44:52 -- accel/accel.sh@19 -- # IFS=: 00:16:29.063 09:44:52 -- accel/accel.sh@19 -- # read -r var val 00:16:29.063 09:44:52 -- accel/accel.sh@20 -- # val=decompress 00:16:29.063 09:44:52 -- accel/accel.sh@21 -- # case "$var" in 00:16:29.063 09:44:52 -- accel/accel.sh@23 -- # accel_opc=decompress 00:16:29.063 09:44:52 -- accel/accel.sh@19 -- # IFS=: 00:16:29.063 09:44:52 -- accel/accel.sh@19 -- # read -r var val 00:16:29.063 09:44:52 -- accel/accel.sh@20 -- # val='111250 bytes' 00:16:29.063 09:44:52 -- accel/accel.sh@21 -- # case "$var" in 00:16:29.063 09:44:52 -- accel/accel.sh@19 -- # IFS=: 00:16:29.063 09:44:52 -- accel/accel.sh@19 -- # read -r var val 00:16:29.063 09:44:52 -- accel/accel.sh@20 -- # val= 00:16:29.063 09:44:52 -- accel/accel.sh@21 -- # case "$var" in 00:16:29.063 09:44:52 -- accel/accel.sh@19 -- # IFS=: 00:16:29.063 09:44:52 -- accel/accel.sh@19 -- # read -r var val 00:16:29.063 09:44:52 -- accel/accel.sh@20 -- # val=software 00:16:29.063 09:44:52 -- accel/accel.sh@21 -- # case "$var" in 00:16:29.063 09:44:52 -- accel/accel.sh@22 -- # accel_module=software 00:16:29.063 09:44:52 -- accel/accel.sh@19 -- # IFS=: 00:16:29.063 09:44:52 -- accel/accel.sh@19 -- # read -r var val 00:16:29.063 09:44:52 -- accel/accel.sh@20 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:16:29.063 09:44:52 -- accel/accel.sh@21 -- # case "$var" in 00:16:29.063 09:44:52 -- accel/accel.sh@19 -- # IFS=: 00:16:29.063 09:44:52 -- accel/accel.sh@19 -- # read -r var val 00:16:29.063 09:44:52 -- accel/accel.sh@20 -- # val=32 00:16:29.063 09:44:52 -- accel/accel.sh@21 -- # case "$var" in 00:16:29.063 09:44:52 -- accel/accel.sh@19 -- # IFS=: 00:16:29.063 09:44:52 -- accel/accel.sh@19 -- # read -r var val 00:16:29.063 09:44:52 -- 
accel/accel.sh@20 -- # val=32 00:16:29.063 09:44:52 -- accel/accel.sh@21 -- # case "$var" in 00:16:29.063 09:44:52 -- accel/accel.sh@19 -- # IFS=: 00:16:29.063 09:44:52 -- accel/accel.sh@19 -- # read -r var val 00:16:29.063 09:44:52 -- accel/accel.sh@20 -- # val=2 00:16:29.063 09:44:52 -- accel/accel.sh@21 -- # case "$var" in 00:16:29.063 09:44:52 -- accel/accel.sh@19 -- # IFS=: 00:16:29.063 09:44:52 -- accel/accel.sh@19 -- # read -r var val 00:16:29.063 09:44:52 -- accel/accel.sh@20 -- # val='1 seconds' 00:16:29.063 09:44:52 -- accel/accel.sh@21 -- # case "$var" in 00:16:29.063 09:44:52 -- accel/accel.sh@19 -- # IFS=: 00:16:29.063 09:44:52 -- accel/accel.sh@19 -- # read -r var val 00:16:29.063 09:44:52 -- accel/accel.sh@20 -- # val=Yes 00:16:29.063 09:44:52 -- accel/accel.sh@21 -- # case "$var" in 00:16:29.063 09:44:52 -- accel/accel.sh@19 -- # IFS=: 00:16:29.063 09:44:52 -- accel/accel.sh@19 -- # read -r var val 00:16:29.063 09:44:52 -- accel/accel.sh@20 -- # val= 00:16:29.063 09:44:52 -- accel/accel.sh@21 -- # case "$var" in 00:16:29.063 09:44:52 -- accel/accel.sh@19 -- # IFS=: 00:16:29.063 09:44:52 -- accel/accel.sh@19 -- # read -r var val 00:16:29.063 09:44:52 -- accel/accel.sh@20 -- # val= 00:16:29.063 09:44:52 -- accel/accel.sh@21 -- # case "$var" in 00:16:29.063 09:44:52 -- accel/accel.sh@19 -- # IFS=: 00:16:29.063 09:44:52 -- accel/accel.sh@19 -- # read -r var val 00:16:30.969 09:44:54 -- accel/accel.sh@20 -- # val= 00:16:30.969 09:44:54 -- accel/accel.sh@21 -- # case "$var" in 00:16:30.969 09:44:54 -- accel/accel.sh@19 -- # IFS=: 00:16:30.969 09:44:54 -- accel/accel.sh@19 -- # read -r var val 00:16:30.969 09:44:54 -- accel/accel.sh@20 -- # val= 00:16:30.969 09:44:54 -- accel/accel.sh@21 -- # case "$var" in 00:16:30.969 09:44:54 -- accel/accel.sh@19 -- # IFS=: 00:16:30.969 09:44:54 -- accel/accel.sh@19 -- # read -r var val 00:16:30.969 09:44:54 -- accel/accel.sh@20 -- # val= 00:16:30.969 09:44:54 -- accel/accel.sh@21 -- # case "$var" in 00:16:30.969 09:44:54 -- accel/accel.sh@19 -- # IFS=: 00:16:30.969 09:44:54 -- accel/accel.sh@19 -- # read -r var val 00:16:30.969 09:44:54 -- accel/accel.sh@20 -- # val= 00:16:30.969 09:44:54 -- accel/accel.sh@21 -- # case "$var" in 00:16:30.969 09:44:54 -- accel/accel.sh@19 -- # IFS=: 00:16:30.969 09:44:54 -- accel/accel.sh@19 -- # read -r var val 00:16:30.969 09:44:54 -- accel/accel.sh@20 -- # val= 00:16:30.969 09:44:54 -- accel/accel.sh@21 -- # case "$var" in 00:16:30.969 09:44:54 -- accel/accel.sh@19 -- # IFS=: 00:16:30.969 09:44:54 -- accel/accel.sh@19 -- # read -r var val 00:16:30.969 09:44:54 -- accel/accel.sh@20 -- # val= 00:16:30.969 09:44:54 -- accel/accel.sh@21 -- # case "$var" in 00:16:30.969 09:44:54 -- accel/accel.sh@19 -- # IFS=: 00:16:30.969 09:44:54 -- accel/accel.sh@19 -- # read -r var val 00:16:30.969 09:44:54 -- accel/accel.sh@20 -- # val= 00:16:30.969 09:44:54 -- accel/accel.sh@21 -- # case "$var" in 00:16:30.969 09:44:54 -- accel/accel.sh@19 -- # IFS=: 00:16:30.969 09:44:54 -- accel/accel.sh@19 -- # read -r var val 00:16:30.969 ************************************ 00:16:30.969 END TEST accel_deomp_full_mthread 00:16:30.969 ************************************ 00:16:30.969 09:44:54 -- accel/accel.sh@27 -- # [[ -n software ]] 00:16:30.969 09:44:54 -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:16:30.969 09:44:54 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:16:30.969 00:16:30.969 real 0m2.605s 00:16:30.969 user 0m2.307s 00:16:30.969 sys 0m0.200s 00:16:30.969 09:44:54 -- common/autotest_common.sh@1112 -- # 
xtrace_disable 00:16:30.969 09:44:54 -- common/autotest_common.sh@10 -- # set +x 00:16:30.969 09:44:54 -- accel/accel.sh@124 -- # [[ n == y ]] 00:16:30.969 09:44:54 -- accel/accel.sh@137 -- # run_test accel_dif_functional_tests /home/vagrant/spdk_repo/spdk/test/accel/dif/dif -c /dev/fd/62 00:16:30.969 09:44:54 -- accel/accel.sh@137 -- # build_accel_config 00:16:30.969 09:44:54 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:16:30.969 09:44:54 -- common/autotest_common.sh@1087 -- # '[' 4 -le 1 ']' 00:16:30.969 09:44:54 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:16:30.969 09:44:54 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:16:30.969 09:44:54 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:16:30.969 09:44:54 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:16:30.969 09:44:54 -- common/autotest_common.sh@10 -- # set +x 00:16:30.969 09:44:54 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:16:30.969 09:44:54 -- accel/accel.sh@40 -- # local IFS=, 00:16:30.969 09:44:54 -- accel/accel.sh@41 -- # jq -r . 00:16:30.969 ************************************ 00:16:30.969 START TEST accel_dif_functional_tests 00:16:30.969 ************************************ 00:16:30.969 09:44:54 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/accel/dif/dif -c /dev/fd/62 00:16:31.229 [2024-04-18 09:44:54.382555] Starting SPDK v24.05-pre git sha1 65b4e17c6 / DPDK 23.11.0 initialization... 00:16:31.229 [2024-04-18 09:44:54.382732] [ DPDK EAL parameters: DIF --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63121 ] 00:16:31.229 [2024-04-18 09:44:54.560053] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 3 00:16:31.488 [2024-04-18 09:44:54.801795] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:16:31.488 [2024-04-18 09:44:54.801942] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:16:31.488 [2024-04-18 09:44:54.801953] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:16:31.748 00:16:31.748 00:16:31.748 CUnit - A unit testing framework for C - Version 2.1-3 00:16:31.748 http://cunit.sourceforge.net/ 00:16:31.748 00:16:31.748 00:16:31.748 Suite: accel_dif 00:16:31.748 Test: verify: DIF generated, GUARD check ...passed 00:16:31.748 Test: verify: DIF generated, APPTAG check ...passed 00:16:31.748 Test: verify: DIF generated, REFTAG check ...passed 00:16:31.748 Test: verify: DIF not generated, GUARD check ...passed 00:16:31.748 Test: verify: DIF not generated, APPTAG check ...[2024-04-18 09:44:55.124371] dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=10, Expected=5a5a, Actual=7867 00:16:31.748 [2024-04-18 09:44:55.124478] dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=10, Expected=5a5a, Actual=7867 00:16:31.748 [2024-04-18 09:44:55.124560] dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=10, Expected=14, Actual=5a5a 00:16:31.748 passed 00:16:31.748 Test: verify: DIF not generated, REFTAG check ...[2024-04-18 09:44:55.124776] dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=10, Expected=14, Actual=5a5a 00:16:31.748 passed 00:16:31.748 Test: verify: APPTAG correct, APPTAG check ...passed 00:16:31.748 Test: verify: APPTAG incorrect, APPTAG check ...[2024-04-18 09:44:55.124852] dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=5a5a5a5a 00:16:31.748 [2024-04-18 09:44:55.124900] dif.c: 
776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=5a5a5a5a 00:16:31.748 [2024-04-18 09:44:55.125008] dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=30, Expected=28, Actual=14 00:16:31.748 passed 00:16:31.748 Test: verify: APPTAG incorrect, no APPTAG check ...passed 00:16:31.748 Test: verify: REFTAG incorrect, REFTAG ignore ...passed 00:16:31.748 Test: verify: REFTAG_INIT correct, REFTAG check ...passed 00:16:31.748 Test: verify: REFTAG_INIT incorrect, REFTAG check ...[2024-04-18 09:44:55.125518] dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=10 00:16:31.748 passed 00:16:31.748 Test: generate copy: DIF generated, GUARD check ...passed 00:16:31.748 Test: generate copy: DIF generated, APTTAG check ...passed 00:16:31.748 Test: generate copy: DIF generated, REFTAG check ...passed 00:16:31.748 Test: generate copy: DIF generated, no GUARD check flag set ...passed 00:16:31.748 Test: generate copy: DIF generated, no APPTAG check flag set ...passed 00:16:31.748 Test: generate copy: DIF generated, no REFTAG check flag set ...passed 00:16:31.748 Test: generate copy: iovecs-len validate ...[2024-04-18 09:44:55.126592] dif.c:1190:spdk_dif_generate_copy: *ERROR*: Size of bounce_iovs arrays are not valid or misaligned with block_size. 00:16:31.748 passed 00:16:31.748 Test: generate copy: buffer alignment validate ...passed 00:16:31.748 00:16:31.748 Run Summary: Type Total Ran Passed Failed Inactive 00:16:31.748 suites 1 1 n/a 0 0 00:16:31.748 tests 20 20 20 0 0 00:16:31.748 asserts 204 204 204 0 n/a 00:16:31.748 00:16:31.748 Elapsed time = 0.007 seconds 00:16:33.136 ************************************ 00:16:33.136 END TEST accel_dif_functional_tests 00:16:33.136 ************************************ 00:16:33.136 00:16:33.136 real 0m1.994s 00:16:33.136 user 0m3.703s 00:16:33.136 sys 0m0.292s 00:16:33.136 09:44:56 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:16:33.136 09:44:56 -- common/autotest_common.sh@10 -- # set +x 00:16:33.136 ************************************ 00:16:33.136 END TEST accel 00:16:33.136 ************************************ 00:16:33.136 00:16:33.136 real 1m4.291s 00:16:33.136 user 1m7.809s 00:16:33.136 sys 0m7.170s 00:16:33.136 09:44:56 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:16:33.136 09:44:56 -- common/autotest_common.sh@10 -- # set +x 00:16:33.136 09:44:56 -- spdk/autotest.sh@180 -- # run_test accel_rpc /home/vagrant/spdk_repo/spdk/test/accel/accel_rpc.sh 00:16:33.136 09:44:56 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:16:33.136 09:44:56 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:16:33.136 09:44:56 -- common/autotest_common.sh@10 -- # set +x 00:16:33.136 ************************************ 00:16:33.136 START TEST accel_rpc 00:16:33.136 ************************************ 00:16:33.136 09:44:56 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/accel/accel_rpc.sh 00:16:33.136 * Looking for test storage... 
00:16:33.136 * Found test storage at /home/vagrant/spdk_repo/spdk/test/accel 00:16:33.136 09:44:56 -- accel/accel_rpc.sh@11 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:16:33.136 09:44:56 -- accel/accel_rpc.sh@14 -- # spdk_tgt_pid=63216 00:16:33.136 09:44:56 -- accel/accel_rpc.sh@13 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --wait-for-rpc 00:16:33.136 09:44:56 -- accel/accel_rpc.sh@15 -- # waitforlisten 63216 00:16:33.136 09:44:56 -- common/autotest_common.sh@817 -- # '[' -z 63216 ']' 00:16:33.136 09:44:56 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:33.136 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:33.136 09:44:56 -- common/autotest_common.sh@822 -- # local max_retries=100 00:16:33.136 09:44:56 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:33.136 09:44:56 -- common/autotest_common.sh@826 -- # xtrace_disable 00:16:33.136 09:44:56 -- common/autotest_common.sh@10 -- # set +x 00:16:33.395 [2024-04-18 09:44:56.627254] Starting SPDK v24.05-pre git sha1 65b4e17c6 / DPDK 23.11.0 initialization... 00:16:33.395 [2024-04-18 09:44:56.627426] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63216 ] 00:16:33.654 [2024-04-18 09:44:56.799053] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:33.654 [2024-04-18 09:44:57.035891] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:16:34.223 09:44:57 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:16:34.223 09:44:57 -- common/autotest_common.sh@850 -- # return 0 00:16:34.223 09:44:57 -- accel/accel_rpc.sh@45 -- # [[ y == y ]] 00:16:34.223 09:44:57 -- accel/accel_rpc.sh@45 -- # [[ 0 -gt 0 ]] 00:16:34.223 09:44:57 -- accel/accel_rpc.sh@49 -- # [[ y == y ]] 00:16:34.223 09:44:57 -- accel/accel_rpc.sh@49 -- # [[ 0 -gt 0 ]] 00:16:34.223 09:44:57 -- accel/accel_rpc.sh@53 -- # run_test accel_assign_opcode accel_assign_opcode_test_suite 00:16:34.223 09:44:57 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:16:34.223 09:44:57 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:16:34.223 09:44:57 -- common/autotest_common.sh@10 -- # set +x 00:16:34.482 ************************************ 00:16:34.482 START TEST accel_assign_opcode 00:16:34.482 ************************************ 00:16:34.482 09:44:57 -- common/autotest_common.sh@1111 -- # accel_assign_opcode_test_suite 00:16:34.482 09:44:57 -- accel/accel_rpc.sh@38 -- # rpc_cmd accel_assign_opc -o copy -m incorrect 00:16:34.482 09:44:57 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:34.482 09:44:57 -- common/autotest_common.sh@10 -- # set +x 00:16:34.482 [2024-04-18 09:44:57.644958] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation copy will be assigned to module incorrect 00:16:34.482 09:44:57 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:34.482 09:44:57 -- accel/accel_rpc.sh@40 -- # rpc_cmd accel_assign_opc -o copy -m software 00:16:34.482 09:44:57 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:34.482 09:44:57 -- common/autotest_common.sh@10 -- # set +x 00:16:34.482 [2024-04-18 09:44:57.656887] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation copy will be assigned to module software 00:16:34.482 09:44:57 -- 
common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:34.482 09:44:57 -- accel/accel_rpc.sh@41 -- # rpc_cmd framework_start_init 00:16:34.482 09:44:57 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:34.482 09:44:57 -- common/autotest_common.sh@10 -- # set +x 00:16:35.050 09:44:58 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:35.050 09:44:58 -- accel/accel_rpc.sh@42 -- # rpc_cmd accel_get_opc_assignments 00:16:35.050 09:44:58 -- accel/accel_rpc.sh@42 -- # jq -r .copy 00:16:35.050 09:44:58 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:35.050 09:44:58 -- accel/accel_rpc.sh@42 -- # grep software 00:16:35.050 09:44:58 -- common/autotest_common.sh@10 -- # set +x 00:16:35.050 09:44:58 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:35.308 software 00:16:35.308 ************************************ 00:16:35.308 END TEST accel_assign_opcode 00:16:35.308 ************************************ 00:16:35.308 00:16:35.308 real 0m0.818s 00:16:35.308 user 0m0.052s 00:16:35.308 sys 0m0.013s 00:16:35.308 09:44:58 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:16:35.308 09:44:58 -- common/autotest_common.sh@10 -- # set +x 00:16:35.308 09:44:58 -- accel/accel_rpc.sh@55 -- # killprocess 63216 00:16:35.308 09:44:58 -- common/autotest_common.sh@936 -- # '[' -z 63216 ']' 00:16:35.308 09:44:58 -- common/autotest_common.sh@940 -- # kill -0 63216 00:16:35.308 09:44:58 -- common/autotest_common.sh@941 -- # uname 00:16:35.308 09:44:58 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:16:35.308 09:44:58 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 63216 00:16:35.308 killing process with pid 63216 00:16:35.308 09:44:58 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:16:35.308 09:44:58 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:16:35.308 09:44:58 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 63216' 00:16:35.308 09:44:58 -- common/autotest_common.sh@955 -- # kill 63216 00:16:35.308 09:44:58 -- common/autotest_common.sh@960 -- # wait 63216 00:16:37.843 00:16:37.843 real 0m4.299s 00:16:37.843 user 0m4.243s 00:16:37.843 sys 0m0.663s 00:16:37.843 09:45:00 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:16:37.843 09:45:00 -- common/autotest_common.sh@10 -- # set +x 00:16:37.843 ************************************ 00:16:37.843 END TEST accel_rpc 00:16:37.843 ************************************ 00:16:37.843 09:45:00 -- spdk/autotest.sh@181 -- # run_test app_cmdline /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:16:37.843 09:45:00 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:16:37.843 09:45:00 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:16:37.843 09:45:00 -- common/autotest_common.sh@10 -- # set +x 00:16:37.843 ************************************ 00:16:37.843 START TEST app_cmdline 00:16:37.843 ************************************ 00:16:37.843 09:45:00 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:16:37.843 * Looking for test storage... 00:16:37.843 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:16:37.843 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:16:37.843 09:45:00 -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 00:16:37.843 09:45:00 -- app/cmdline.sh@17 -- # spdk_tgt_pid=63341 00:16:37.843 09:45:00 -- app/cmdline.sh@18 -- # waitforlisten 63341 00:16:37.843 09:45:00 -- app/cmdline.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods 00:16:37.843 09:45:00 -- common/autotest_common.sh@817 -- # '[' -z 63341 ']' 00:16:37.843 09:45:00 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:37.843 09:45:00 -- common/autotest_common.sh@822 -- # local max_retries=100 00:16:37.843 09:45:00 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:37.843 09:45:00 -- common/autotest_common.sh@826 -- # xtrace_disable 00:16:37.843 09:45:00 -- common/autotest_common.sh@10 -- # set +x 00:16:37.843 [2024-04-18 09:45:01.071871] Starting SPDK v24.05-pre git sha1 65b4e17c6 / DPDK 23.11.0 initialization... 00:16:37.843 [2024-04-18 09:45:01.072058] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63341 ] 00:16:38.102 [2024-04-18 09:45:01.245117] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:38.360 [2024-04-18 09:45:01.501325] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:16:38.929 09:45:02 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:16:38.929 09:45:02 -- common/autotest_common.sh@850 -- # return 0 00:16:38.929 09:45:02 -- app/cmdline.sh@20 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py spdk_get_version 00:16:39.189 { 00:16:39.189 "version": "SPDK v24.05-pre git sha1 65b4e17c6", 00:16:39.189 "fields": { 00:16:39.189 "major": 24, 00:16:39.189 "minor": 5, 00:16:39.189 "patch": 0, 00:16:39.189 "suffix": "-pre", 00:16:39.189 "commit": "65b4e17c6" 00:16:39.189 } 00:16:39.189 } 00:16:39.189 09:45:02 -- app/cmdline.sh@22 -- # expected_methods=() 00:16:39.189 09:45:02 -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods") 00:16:39.189 09:45:02 -- app/cmdline.sh@24 -- # expected_methods+=("spdk_get_version") 00:16:39.189 09:45:02 -- app/cmdline.sh@26 -- # methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort)) 00:16:39.189 09:45:02 -- app/cmdline.sh@26 -- # jq -r '.[]' 00:16:39.189 09:45:02 -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods 00:16:39.189 09:45:02 -- app/cmdline.sh@26 -- # sort 00:16:39.189 09:45:02 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:39.189 09:45:02 -- common/autotest_common.sh@10 -- # set +x 00:16:39.189 09:45:02 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:39.189 09:45:02 -- app/cmdline.sh@27 -- # (( 2 == 2 )) 00:16:39.189 09:45:02 -- app/cmdline.sh@28 -- # [[ rpc_get_methods spdk_get_version == \r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]] 00:16:39.189 09:45:02 -- app/cmdline.sh@30 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:16:39.189 09:45:02 -- common/autotest_common.sh@638 -- # local es=0 00:16:39.189 09:45:02 -- common/autotest_common.sh@640 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:16:39.189 09:45:02 -- common/autotest_common.sh@626 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:16:39.189 
09:45:02 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:16:39.189 09:45:02 -- common/autotest_common.sh@630 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:16:39.189 09:45:02 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:16:39.189 09:45:02 -- common/autotest_common.sh@632 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:16:39.189 09:45:02 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:16:39.189 09:45:02 -- common/autotest_common.sh@632 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:16:39.189 09:45:02 -- common/autotest_common.sh@632 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:16:39.189 09:45:02 -- common/autotest_common.sh@641 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:16:39.448 request: 00:16:39.448 { 00:16:39.448 "method": "env_dpdk_get_mem_stats", 00:16:39.448 "req_id": 1 00:16:39.448 } 00:16:39.448 Got JSON-RPC error response 00:16:39.448 response: 00:16:39.448 { 00:16:39.448 "code": -32601, 00:16:39.448 "message": "Method not found" 00:16:39.448 } 00:16:39.707 09:45:02 -- common/autotest_common.sh@641 -- # es=1 00:16:39.707 09:45:02 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:16:39.707 09:45:02 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:16:39.707 09:45:02 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:16:39.707 09:45:02 -- app/cmdline.sh@1 -- # killprocess 63341 00:16:39.707 09:45:02 -- common/autotest_common.sh@936 -- # '[' -z 63341 ']' 00:16:39.707 09:45:02 -- common/autotest_common.sh@940 -- # kill -0 63341 00:16:39.707 09:45:02 -- common/autotest_common.sh@941 -- # uname 00:16:39.707 09:45:02 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:16:39.707 09:45:02 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 63341 00:16:39.707 killing process with pid 63341 00:16:39.707 09:45:02 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:16:39.707 09:45:02 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:16:39.707 09:45:02 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 63341' 00:16:39.707 09:45:02 -- common/autotest_common.sh@955 -- # kill 63341 00:16:39.707 09:45:02 -- common/autotest_common.sh@960 -- # wait 63341 00:16:42.239 ************************************ 00:16:42.239 END TEST app_cmdline 00:16:42.239 ************************************ 00:16:42.239 00:16:42.239 real 0m4.246s 00:16:42.239 user 0m4.578s 00:16:42.239 sys 0m0.656s 00:16:42.239 09:45:05 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:16:42.239 09:45:05 -- common/autotest_common.sh@10 -- # set +x 00:16:42.239 09:45:05 -- spdk/autotest.sh@182 -- # run_test version /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:16:42.239 09:45:05 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:16:42.239 09:45:05 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:16:42.239 09:45:05 -- common/autotest_common.sh@10 -- # set +x 00:16:42.239 ************************************ 00:16:42.239 START TEST version 00:16:42.239 ************************************ 00:16:42.239 09:45:05 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:16:42.239 * Looking for test storage... 
00:16:42.239 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:16:42.239 09:45:05 -- app/version.sh@17 -- # get_header_version major 00:16:42.239 09:45:05 -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:16:42.239 09:45:05 -- app/version.sh@14 -- # tr -d '"' 00:16:42.239 09:45:05 -- app/version.sh@14 -- # cut -f2 00:16:42.239 09:45:05 -- app/version.sh@17 -- # major=24 00:16:42.239 09:45:05 -- app/version.sh@18 -- # get_header_version minor 00:16:42.239 09:45:05 -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:16:42.239 09:45:05 -- app/version.sh@14 -- # cut -f2 00:16:42.239 09:45:05 -- app/version.sh@14 -- # tr -d '"' 00:16:42.239 09:45:05 -- app/version.sh@18 -- # minor=5 00:16:42.239 09:45:05 -- app/version.sh@19 -- # get_header_version patch 00:16:42.239 09:45:05 -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:16:42.239 09:45:05 -- app/version.sh@14 -- # cut -f2 00:16:42.239 09:45:05 -- app/version.sh@14 -- # tr -d '"' 00:16:42.239 09:45:05 -- app/version.sh@19 -- # patch=0 00:16:42.239 09:45:05 -- app/version.sh@20 -- # get_header_version suffix 00:16:42.239 09:45:05 -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:16:42.239 09:45:05 -- app/version.sh@14 -- # cut -f2 00:16:42.239 09:45:05 -- app/version.sh@14 -- # tr -d '"' 00:16:42.239 09:45:05 -- app/version.sh@20 -- # suffix=-pre 00:16:42.239 09:45:05 -- app/version.sh@22 -- # version=24.5 00:16:42.239 09:45:05 -- app/version.sh@25 -- # (( patch != 0 )) 00:16:42.239 09:45:05 -- app/version.sh@28 -- # version=24.5rc0 00:16:42.239 09:45:05 -- app/version.sh@30 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:16:42.239 09:45:05 -- app/version.sh@30 -- # python3 -c 'import spdk; print(spdk.__version__)' 00:16:42.239 09:45:05 -- app/version.sh@30 -- # py_version=24.5rc0 00:16:42.239 09:45:05 -- app/version.sh@31 -- # [[ 24.5rc0 == \2\4\.\5\r\c\0 ]] 00:16:42.239 00:16:42.239 real 0m0.165s 00:16:42.239 user 0m0.096s 00:16:42.239 sys 0m0.098s 00:16:42.239 09:45:05 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:16:42.239 ************************************ 00:16:42.239 END TEST version 00:16:42.239 ************************************ 00:16:42.239 09:45:05 -- common/autotest_common.sh@10 -- # set +x 00:16:42.239 09:45:05 -- spdk/autotest.sh@184 -- # '[' 0 -eq 1 ']' 00:16:42.239 09:45:05 -- spdk/autotest.sh@194 -- # uname -s 00:16:42.239 09:45:05 -- spdk/autotest.sh@194 -- # [[ Linux == Linux ]] 00:16:42.239 09:45:05 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:16:42.239 09:45:05 -- spdk/autotest.sh@195 -- # [[ 1 -eq 1 ]] 00:16:42.239 09:45:05 -- spdk/autotest.sh@201 -- # [[ 0 -eq 0 ]] 00:16:42.239 09:45:05 -- spdk/autotest.sh@202 -- # run_test spdk_dd /home/vagrant/spdk_repo/spdk/test/dd/dd.sh 00:16:42.239 09:45:05 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:16:42.239 09:45:05 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:16:42.239 09:45:05 -- common/autotest_common.sh@10 -- # set +x 00:16:42.239 ************************************ 00:16:42.239 START TEST spdk_dd 00:16:42.239 
************************************ 00:16:42.239 09:45:05 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/dd/dd.sh 00:16:42.239 * Looking for test storage... 00:16:42.239 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:16:42.239 09:45:05 -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:16:42.239 09:45:05 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:42.239 09:45:05 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:42.239 09:45:05 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:42.239 09:45:05 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:42.240 09:45:05 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:42.240 09:45:05 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:42.240 09:45:05 -- paths/export.sh@5 -- # export PATH 00:16:42.240 09:45:05 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:42.240 09:45:05 -- dd/dd.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:16:42.809 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:16:42.809 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:16:42.809 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:16:42.809 09:45:05 -- dd/dd.sh@11 -- # nvmes=($(nvme_in_userspace)) 00:16:42.809 09:45:05 -- dd/dd.sh@11 -- # nvme_in_userspace 00:16:42.809 09:45:05 -- scripts/common.sh@309 -- # local bdf bdfs 00:16:42.809 09:45:05 -- scripts/common.sh@310 -- # local nvmes 00:16:42.809 09:45:05 -- scripts/common.sh@312 -- # [[ -n '' ]] 00:16:42.809 09:45:05 -- scripts/common.sh@315 -- # nvmes=($(iter_pci_class_code 01 08 02)) 00:16:42.809 09:45:05 -- scripts/common.sh@315 -- # iter_pci_class_code 01 08 02 00:16:42.809 09:45:05 -- scripts/common.sh@295 -- # local bdf= 00:16:42.809 09:45:05 -- scripts/common.sh@297 -- # iter_all_pci_class_code 01 08 02 00:16:42.809 09:45:05 -- scripts/common.sh@230 -- # local class 
00:16:42.809 09:45:05 -- scripts/common.sh@231 -- # local subclass 00:16:42.809 09:45:05 -- scripts/common.sh@232 -- # local progif 00:16:42.809 09:45:05 -- scripts/common.sh@233 -- # printf %02x 1 00:16:42.809 09:45:05 -- scripts/common.sh@233 -- # class=01 00:16:42.809 09:45:05 -- scripts/common.sh@234 -- # printf %02x 8 00:16:42.809 09:45:05 -- scripts/common.sh@234 -- # subclass=08 00:16:42.809 09:45:06 -- scripts/common.sh@235 -- # printf %02x 2 00:16:42.809 09:45:06 -- scripts/common.sh@235 -- # progif=02 00:16:42.809 09:45:06 -- scripts/common.sh@237 -- # hash lspci 00:16:42.809 09:45:06 -- scripts/common.sh@238 -- # '[' 02 '!=' 00 ']' 00:16:42.809 09:45:06 -- scripts/common.sh@239 -- # lspci -mm -n -D 00:16:42.809 09:45:06 -- scripts/common.sh@240 -- # grep -i -- -p02 00:16:42.809 09:45:06 -- scripts/common.sh@241 -- # awk -v 'cc="0108"' -F ' ' '{if (cc ~ $2) print $1}' 00:16:42.809 09:45:06 -- scripts/common.sh@242 -- # tr -d '"' 00:16:42.809 09:45:06 -- scripts/common.sh@297 -- # for bdf in $(iter_all_pci_class_code "$@") 00:16:42.809 09:45:06 -- scripts/common.sh@298 -- # pci_can_use 0000:00:10.0 00:16:42.809 09:45:06 -- scripts/common.sh@15 -- # local i 00:16:42.809 09:45:06 -- scripts/common.sh@18 -- # [[ =~ 0000:00:10.0 ]] 00:16:42.809 09:45:06 -- scripts/common.sh@22 -- # [[ -z '' ]] 00:16:42.809 09:45:06 -- scripts/common.sh@24 -- # return 0 00:16:42.809 09:45:06 -- scripts/common.sh@299 -- # echo 0000:00:10.0 00:16:42.809 09:45:06 -- scripts/common.sh@297 -- # for bdf in $(iter_all_pci_class_code "$@") 00:16:42.809 09:45:06 -- scripts/common.sh@298 -- # pci_can_use 0000:00:11.0 00:16:42.809 09:45:06 -- scripts/common.sh@15 -- # local i 00:16:42.809 09:45:06 -- scripts/common.sh@18 -- # [[ =~ 0000:00:11.0 ]] 00:16:42.809 09:45:06 -- scripts/common.sh@22 -- # [[ -z '' ]] 00:16:42.809 09:45:06 -- scripts/common.sh@24 -- # return 0 00:16:42.809 09:45:06 -- scripts/common.sh@299 -- # echo 0000:00:11.0 00:16:42.809 09:45:06 -- scripts/common.sh@318 -- # for bdf in "${nvmes[@]}" 00:16:42.809 09:45:06 -- scripts/common.sh@319 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:10.0 ]] 00:16:42.809 09:45:06 -- scripts/common.sh@320 -- # uname -s 00:16:42.809 09:45:06 -- scripts/common.sh@320 -- # [[ Linux == FreeBSD ]] 00:16:42.809 09:45:06 -- scripts/common.sh@323 -- # bdfs+=("$bdf") 00:16:42.809 09:45:06 -- scripts/common.sh@318 -- # for bdf in "${nvmes[@]}" 00:16:42.809 09:45:06 -- scripts/common.sh@319 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:11.0 ]] 00:16:42.809 09:45:06 -- scripts/common.sh@320 -- # uname -s 00:16:42.809 09:45:06 -- scripts/common.sh@320 -- # [[ Linux == FreeBSD ]] 00:16:42.809 09:45:06 -- scripts/common.sh@323 -- # bdfs+=("$bdf") 00:16:42.809 09:45:06 -- scripts/common.sh@325 -- # (( 2 )) 00:16:42.809 09:45:06 -- scripts/common.sh@326 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 00:16:42.809 09:45:06 -- dd/dd.sh@13 -- # check_liburing 00:16:42.809 09:45:06 -- dd/common.sh@139 -- # local lib so 00:16:42.809 09:45:06 -- dd/common.sh@140 -- # local -g liburing_in_use=0 00:16:42.809 09:45:06 -- dd/common.sh@142 -- # read -r lib _ so _ 00:16:42.809 09:45:06 -- dd/common.sh@137 -- # LD_TRACE_LOADED_OBJECTS=1 00:16:42.809 09:45:06 -- dd/common.sh@137 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:16:42.809 09:45:06 -- dd/common.sh@143 -- # [[ linux-vdso.so.1 == liburing.so.* ]] 00:16:42.809 09:45:06 -- dd/common.sh@142 -- # read -r lib _ so _ 00:16:42.809 09:45:06 -- dd/common.sh@143 -- # [[ libasan.so.8 == liburing.so.* ]] 00:16:42.809 09:45:06 -- 
dd/common.sh@142 -- # read -r lib _ so _ 00:16:42.809 09:45:06 -- dd/common.sh@143 -- # [[ libspdk_bdev_malloc.so.6.0 == liburing.so.* ]] 00:16:42.809 09:45:06 -- dd/common.sh@142 -- # read -r lib _ so _ 00:16:42.809 09:45:06 -- dd/common.sh@143 -- # [[ libspdk_bdev_null.so.6.0 == liburing.so.* ]] 00:16:42.809 09:45:06 -- dd/common.sh@142 -- # read -r lib _ so _ 00:16:42.809 09:45:06 -- dd/common.sh@143 -- # [[ libspdk_bdev_nvme.so.7.0 == liburing.so.* ]] 00:16:42.809 09:45:06 -- dd/common.sh@142 -- # read -r lib _ so _ 00:16:42.809 09:45:06 -- dd/common.sh@143 -- # [[ libspdk_bdev_passthru.so.6.0 == liburing.so.* ]] 00:16:42.809 09:45:06 -- dd/common.sh@142 -- # read -r lib _ so _ 00:16:42.809 09:45:06 -- dd/common.sh@143 -- # [[ libspdk_bdev_lvol.so.6.0 == liburing.so.* ]] 00:16:42.809 09:45:06 -- dd/common.sh@142 -- # read -r lib _ so _ 00:16:42.809 09:45:06 -- dd/common.sh@143 -- # [[ libspdk_bdev_raid.so.6.0 == liburing.so.* ]] 00:16:42.809 09:45:06 -- dd/common.sh@142 -- # read -r lib _ so _ 00:16:42.809 09:45:06 -- dd/common.sh@143 -- # [[ libspdk_bdev_error.so.6.0 == liburing.so.* ]] 00:16:42.809 09:45:06 -- dd/common.sh@142 -- # read -r lib _ so _ 00:16:42.809 09:45:06 -- dd/common.sh@143 -- # [[ libspdk_bdev_gpt.so.6.0 == liburing.so.* ]] 00:16:42.809 09:45:06 -- dd/common.sh@142 -- # read -r lib _ so _ 00:16:42.809 09:45:06 -- dd/common.sh@143 -- # [[ libspdk_bdev_split.so.6.0 == liburing.so.* ]] 00:16:42.809 09:45:06 -- dd/common.sh@142 -- # read -r lib _ so _ 00:16:42.809 09:45:06 -- dd/common.sh@143 -- # [[ libspdk_bdev_delay.so.6.0 == liburing.so.* ]] 00:16:42.809 09:45:06 -- dd/common.sh@142 -- # read -r lib _ so _ 00:16:42.809 09:45:06 -- dd/common.sh@143 -- # [[ libspdk_bdev_zone_block.so.6.0 == liburing.so.* ]] 00:16:42.809 09:45:06 -- dd/common.sh@142 -- # read -r lib _ so _ 00:16:42.809 09:45:06 -- dd/common.sh@143 -- # [[ libspdk_blobfs_bdev.so.6.0 == liburing.so.* ]] 00:16:42.809 09:45:06 -- dd/common.sh@142 -- # read -r lib _ so _ 00:16:42.809 09:45:06 -- dd/common.sh@143 -- # [[ libspdk_blobfs.so.10.0 == liburing.so.* ]] 00:16:42.809 09:45:06 -- dd/common.sh@142 -- # read -r lib _ so _ 00:16:42.809 09:45:06 -- dd/common.sh@143 -- # [[ libspdk_blob_bdev.so.11.0 == liburing.so.* ]] 00:16:42.809 09:45:06 -- dd/common.sh@142 -- # read -r lib _ so _ 00:16:42.809 09:45:06 -- dd/common.sh@143 -- # [[ libspdk_lvol.so.10.0 == liburing.so.* ]] 00:16:42.809 09:45:06 -- dd/common.sh@142 -- # read -r lib _ so _ 00:16:42.809 09:45:06 -- dd/common.sh@143 -- # [[ libspdk_blob.so.11.0 == liburing.so.* ]] 00:16:42.809 09:45:06 -- dd/common.sh@142 -- # read -r lib _ so _ 00:16:42.809 09:45:06 -- dd/common.sh@143 -- # [[ libspdk_nvme.so.13.0 == liburing.so.* ]] 00:16:42.809 09:45:06 -- dd/common.sh@142 -- # read -r lib _ so _ 00:16:42.809 09:45:06 -- dd/common.sh@143 -- # [[ libspdk_rdma.so.6.0 == liburing.so.* ]] 00:16:42.809 09:45:06 -- dd/common.sh@142 -- # read -r lib _ so _ 00:16:42.809 09:45:06 -- dd/common.sh@143 -- # [[ libspdk_bdev_aio.so.6.0 == liburing.so.* ]] 00:16:42.809 09:45:06 -- dd/common.sh@142 -- # read -r lib _ so _ 00:16:42.809 09:45:06 -- dd/common.sh@143 -- # [[ libspdk_bdev_ftl.so.6.0 == liburing.so.* ]] 00:16:42.809 09:45:06 -- dd/common.sh@142 -- # read -r lib _ so _ 00:16:42.809 09:45:06 -- dd/common.sh@143 -- # [[ libspdk_ftl.so.9.0 == liburing.so.* ]] 00:16:42.810 09:45:06 -- dd/common.sh@142 -- # read -r lib _ so _ 00:16:42.810 09:45:06 -- dd/common.sh@143 -- # [[ libspdk_bdev_virtio.so.6.0 == liburing.so.* ]] 00:16:42.810 09:45:06 -- dd/common.sh@142 -- 
# read -r lib _ so _ 00:16:42.810 09:45:06 -- dd/common.sh@143 -- # [[ libspdk_virtio.so.7.0 == liburing.so.* ]] 00:16:42.810 09:45:06 -- dd/common.sh@142 -- # read -r lib _ so _ 00:16:42.810 09:45:06 -- dd/common.sh@143 -- # [[ libspdk_vfio_user.so.5.0 == liburing.so.* ]] 00:16:42.810 09:45:06 -- dd/common.sh@142 -- # read -r lib _ so _ 00:16:42.810 09:45:06 -- dd/common.sh@143 -- # [[ libspdk_bdev_iscsi.so.6.0 == liburing.so.* ]] 00:16:42.810 09:45:06 -- dd/common.sh@142 -- # read -r lib _ so _ 00:16:42.810 09:45:06 -- dd/common.sh@143 -- # [[ libspdk_bdev_uring.so.6.0 == liburing.so.* ]] 00:16:42.810 09:45:06 -- dd/common.sh@142 -- # read -r lib _ so _ 00:16:42.810 09:45:06 -- dd/common.sh@143 -- # [[ libspdk_accel_error.so.2.0 == liburing.so.* ]] 00:16:42.810 09:45:06 -- dd/common.sh@142 -- # read -r lib _ so _ 00:16:42.810 09:45:06 -- dd/common.sh@143 -- # [[ libspdk_accel_ioat.so.6.0 == liburing.so.* ]] 00:16:42.810 09:45:06 -- dd/common.sh@142 -- # read -r lib _ so _ 00:16:42.810 09:45:06 -- dd/common.sh@143 -- # [[ libspdk_ioat.so.7.0 == liburing.so.* ]] 00:16:42.810 09:45:06 -- dd/common.sh@142 -- # read -r lib _ so _ 00:16:42.810 09:45:06 -- dd/common.sh@143 -- # [[ libspdk_accel_dsa.so.5.0 == liburing.so.* ]] 00:16:42.810 09:45:06 -- dd/common.sh@142 -- # read -r lib _ so _ 00:16:42.810 09:45:06 -- dd/common.sh@143 -- # [[ libspdk_accel_iaa.so.3.0 == liburing.so.* ]] 00:16:42.810 09:45:06 -- dd/common.sh@142 -- # read -r lib _ so _ 00:16:42.810 09:45:06 -- dd/common.sh@143 -- # [[ libspdk_idxd.so.12.0 == liburing.so.* ]] 00:16:42.810 09:45:06 -- dd/common.sh@142 -- # read -r lib _ so _ 00:16:42.810 09:45:06 -- dd/common.sh@143 -- # [[ libspdk_scheduler_dynamic.so.4.0 == liburing.so.* ]] 00:16:42.810 09:45:06 -- dd/common.sh@142 -- # read -r lib _ so _ 00:16:42.810 09:45:06 -- dd/common.sh@143 -- # [[ libspdk_env_dpdk.so.14.0 == liburing.so.* ]] 00:16:42.810 09:45:06 -- dd/common.sh@142 -- # read -r lib _ so _ 00:16:42.810 09:45:06 -- dd/common.sh@143 -- # [[ libspdk_scheduler_dpdk_governor.so.4.0 == liburing.so.* ]] 00:16:42.810 09:45:06 -- dd/common.sh@142 -- # read -r lib _ so _ 00:16:42.810 09:45:06 -- dd/common.sh@143 -- # [[ libspdk_scheduler_gscheduler.so.4.0 == liburing.so.* ]] 00:16:42.810 09:45:06 -- dd/common.sh@142 -- # read -r lib _ so _ 00:16:42.810 09:45:06 -- dd/common.sh@143 -- # [[ libspdk_sock_posix.so.6.0 == liburing.so.* ]] 00:16:42.810 09:45:06 -- dd/common.sh@142 -- # read -r lib _ so _ 00:16:42.810 09:45:06 -- dd/common.sh@143 -- # [[ libspdk_sock_uring.so.5.0 == liburing.so.* ]] 00:16:42.810 09:45:06 -- dd/common.sh@142 -- # read -r lib _ so _ 00:16:42.810 09:45:06 -- dd/common.sh@143 -- # [[ libspdk_keyring_file.so.1.0 == liburing.so.* ]] 00:16:42.810 09:45:06 -- dd/common.sh@142 -- # read -r lib _ so _ 00:16:42.810 09:45:06 -- dd/common.sh@143 -- # [[ libspdk_event.so.13.0 == liburing.so.* ]] 00:16:42.810 09:45:06 -- dd/common.sh@142 -- # read -r lib _ so _ 00:16:42.810 09:45:06 -- dd/common.sh@143 -- # [[ libspdk_event_bdev.so.6.0 == liburing.so.* ]] 00:16:42.810 09:45:06 -- dd/common.sh@142 -- # read -r lib _ so _ 00:16:42.810 09:45:06 -- dd/common.sh@143 -- # [[ libspdk_bdev.so.15.0 == liburing.so.* ]] 00:16:42.810 09:45:06 -- dd/common.sh@142 -- # read -r lib _ so _ 00:16:42.810 09:45:06 -- dd/common.sh@143 -- # [[ libspdk_notify.so.6.0 == liburing.so.* ]] 00:16:42.810 09:45:06 -- dd/common.sh@142 -- # read -r lib _ so _ 00:16:42.810 09:45:06 -- dd/common.sh@143 -- # [[ libspdk_event_accel.so.6.0 == liburing.so.* ]] 00:16:42.810 09:45:06 -- 
dd/common.sh@142 -- # read -r lib _ so _ 00:16:42.810 09:45:06 -- dd/common.sh@143 -- # [[ libspdk_accel.so.15.0 == liburing.so.* ]] 00:16:42.810 09:45:06 -- dd/common.sh@142 -- # read -r lib _ so _ 00:16:42.810 09:45:06 -- dd/common.sh@143 -- # [[ libspdk_dma.so.4.0 == liburing.so.* ]] 00:16:42.810 09:45:06 -- dd/common.sh@142 -- # read -r lib _ so _ 00:16:42.810 09:45:06 -- dd/common.sh@143 -- # [[ libspdk_event_vmd.so.6.0 == liburing.so.* ]] 00:16:42.810 09:45:06 -- dd/common.sh@142 -- # read -r lib _ so _ 00:16:42.810 09:45:06 -- dd/common.sh@143 -- # [[ libspdk_vmd.so.6.0 == liburing.so.* ]] 00:16:42.810 09:45:06 -- dd/common.sh@142 -- # read -r lib _ so _ 00:16:42.810 09:45:06 -- dd/common.sh@143 -- # [[ libspdk_event_sock.so.5.0 == liburing.so.* ]] 00:16:42.810 09:45:06 -- dd/common.sh@142 -- # read -r lib _ so _ 00:16:42.810 09:45:06 -- dd/common.sh@143 -- # [[ libspdk_sock.so.9.0 == liburing.so.* ]] 00:16:42.810 09:45:06 -- dd/common.sh@142 -- # read -r lib _ so _ 00:16:42.810 09:45:06 -- dd/common.sh@143 -- # [[ libspdk_event_iobuf.so.3.0 == liburing.so.* ]] 00:16:42.810 09:45:06 -- dd/common.sh@142 -- # read -r lib _ so _ 00:16:42.810 09:45:06 -- dd/common.sh@143 -- # [[ libspdk_event_keyring.so.1.0 == liburing.so.* ]] 00:16:42.810 09:45:06 -- dd/common.sh@142 -- # read -r lib _ so _ 00:16:42.810 09:45:06 -- dd/common.sh@143 -- # [[ libspdk_init.so.5.0 == liburing.so.* ]] 00:16:42.810 09:45:06 -- dd/common.sh@142 -- # read -r lib _ so _ 00:16:42.810 09:45:06 -- dd/common.sh@143 -- # [[ libspdk_thread.so.10.0 == liburing.so.* ]] 00:16:42.810 09:45:06 -- dd/common.sh@142 -- # read -r lib _ so _ 00:16:42.810 09:45:06 -- dd/common.sh@143 -- # [[ libspdk_trace.so.10.0 == liburing.so.* ]] 00:16:42.810 09:45:06 -- dd/common.sh@142 -- # read -r lib _ so _ 00:16:42.810 09:45:06 -- dd/common.sh@143 -- # [[ libspdk_keyring.so.1.0 == liburing.so.* ]] 00:16:42.810 09:45:06 -- dd/common.sh@142 -- # read -r lib _ so _ 00:16:42.810 09:45:06 -- dd/common.sh@143 -- # [[ libspdk_rpc.so.6.0 == liburing.so.* ]] 00:16:42.810 09:45:06 -- dd/common.sh@142 -- # read -r lib _ so _ 00:16:42.810 09:45:06 -- dd/common.sh@143 -- # [[ libspdk_jsonrpc.so.6.0 == liburing.so.* ]] 00:16:42.810 09:45:06 -- dd/common.sh@142 -- # read -r lib _ so _ 00:16:42.810 09:45:06 -- dd/common.sh@143 -- # [[ libspdk_json.so.6.0 == liburing.so.* ]] 00:16:42.810 09:45:06 -- dd/common.sh@142 -- # read -r lib _ so _ 00:16:42.810 09:45:06 -- dd/common.sh@143 -- # [[ libspdk_util.so.9.0 == liburing.so.* ]] 00:16:42.810 09:45:06 -- dd/common.sh@142 -- # read -r lib _ so _ 00:16:42.810 09:45:06 -- dd/common.sh@143 -- # [[ libspdk_log.so.7.0 == liburing.so.* ]] 00:16:42.810 09:45:06 -- dd/common.sh@142 -- # read -r lib _ so _ 00:16:42.810 09:45:06 -- dd/common.sh@143 -- # [[ librte_bus_pci.so.24 == liburing.so.* ]] 00:16:42.810 09:45:06 -- dd/common.sh@142 -- # read -r lib _ so _ 00:16:42.810 09:45:06 -- dd/common.sh@143 -- # [[ librte_cryptodev.so.24 == liburing.so.* ]] 00:16:42.810 09:45:06 -- dd/common.sh@142 -- # read -r lib _ so _ 00:16:42.810 09:45:06 -- dd/common.sh@143 -- # [[ librte_dmadev.so.24 == liburing.so.* ]] 00:16:42.810 09:45:06 -- dd/common.sh@142 -- # read -r lib _ so _ 00:16:42.810 09:45:06 -- dd/common.sh@143 -- # [[ librte_eal.so.24 == liburing.so.* ]] 00:16:42.810 09:45:06 -- dd/common.sh@142 -- # read -r lib _ so _ 00:16:42.810 09:45:06 -- dd/common.sh@143 -- # [[ librte_ethdev.so.24 == liburing.so.* ]] 00:16:42.810 09:45:06 -- dd/common.sh@142 -- # read -r lib _ so _ 00:16:42.810 09:45:06 -- dd/common.sh@143 -- 
# [[ librte_hash.so.24 == liburing.so.* ]] 00:16:42.810 09:45:06 -- dd/common.sh@142 -- # read -r lib _ so _ 00:16:42.810 09:45:06 -- dd/common.sh@143 -- # [[ librte_kvargs.so.24 == liburing.so.* ]] 00:16:42.810 09:45:06 -- dd/common.sh@142 -- # read -r lib _ so _ 00:16:42.810 09:45:06 -- dd/common.sh@143 -- # [[ librte_log.so.24 == liburing.so.* ]] 00:16:42.810 09:45:06 -- dd/common.sh@142 -- # read -r lib _ so _ 00:16:42.810 09:45:06 -- dd/common.sh@143 -- # [[ librte_mbuf.so.24 == liburing.so.* ]] 00:16:42.810 09:45:06 -- dd/common.sh@142 -- # read -r lib _ so _ 00:16:42.810 09:45:06 -- dd/common.sh@143 -- # [[ librte_mempool.so.24 == liburing.so.* ]] 00:16:42.810 09:45:06 -- dd/common.sh@142 -- # read -r lib _ so _ 00:16:42.810 09:45:06 -- dd/common.sh@143 -- # [[ librte_mempool_ring.so.24 == liburing.so.* ]] 00:16:42.810 09:45:06 -- dd/common.sh@142 -- # read -r lib _ so _ 00:16:42.810 09:45:06 -- dd/common.sh@143 -- # [[ librte_net.so.24 == liburing.so.* ]] 00:16:42.810 09:45:06 -- dd/common.sh@142 -- # read -r lib _ so _ 00:16:42.810 09:45:06 -- dd/common.sh@143 -- # [[ librte_pci.so.24 == liburing.so.* ]] 00:16:42.810 09:45:06 -- dd/common.sh@142 -- # read -r lib _ so _ 00:16:42.810 09:45:06 -- dd/common.sh@143 -- # [[ librte_power.so.24 == liburing.so.* ]] 00:16:42.810 09:45:06 -- dd/common.sh@142 -- # read -r lib _ so _ 00:16:42.810 09:45:06 -- dd/common.sh@143 -- # [[ librte_rcu.so.24 == liburing.so.* ]] 00:16:42.810 09:45:06 -- dd/common.sh@142 -- # read -r lib _ so _ 00:16:42.810 09:45:06 -- dd/common.sh@143 -- # [[ librte_ring.so.24 == liburing.so.* ]] 00:16:42.810 09:45:06 -- dd/common.sh@142 -- # read -r lib _ so _ 00:16:42.810 09:45:06 -- dd/common.sh@143 -- # [[ librte_telemetry.so.24 == liburing.so.* ]] 00:16:42.810 09:45:06 -- dd/common.sh@142 -- # read -r lib _ so _ 00:16:42.810 09:45:06 -- dd/common.sh@143 -- # [[ librte_vhost.so.24 == liburing.so.* ]] 00:16:42.810 09:45:06 -- dd/common.sh@142 -- # read -r lib _ so _ 00:16:42.810 09:45:06 -- dd/common.sh@143 -- # [[ liburing.so.2 == liburing.so.* ]] 00:16:42.810 09:45:06 -- dd/common.sh@144 -- # printf '* spdk_dd linked to liburing\n' 00:16:42.810 * spdk_dd linked to liburing 00:16:42.810 09:45:06 -- dd/common.sh@146 -- # [[ -e /home/vagrant/spdk_repo/spdk/test/common/build_config.sh ]] 00:16:42.810 09:45:06 -- dd/common.sh@147 -- # source /home/vagrant/spdk_repo/spdk/test/common/build_config.sh 00:16:42.810 09:45:06 -- common/build_config.sh@1 -- # CONFIG_WPDK_DIR= 00:16:42.810 09:45:06 -- common/build_config.sh@2 -- # CONFIG_ASAN=y 00:16:42.810 09:45:06 -- common/build_config.sh@3 -- # CONFIG_VBDEV_COMPRESS=n 00:16:42.810 09:45:06 -- common/build_config.sh@4 -- # CONFIG_HAVE_EXECINFO_H=y 00:16:42.810 09:45:06 -- common/build_config.sh@5 -- # CONFIG_USDT=y 00:16:42.810 09:45:06 -- common/build_config.sh@6 -- # CONFIG_CUSTOMOCF=n 00:16:42.810 09:45:06 -- common/build_config.sh@7 -- # CONFIG_PREFIX=/usr/local 00:16:42.810 09:45:06 -- common/build_config.sh@8 -- # CONFIG_RBD=n 00:16:42.810 09:45:06 -- common/build_config.sh@9 -- # CONFIG_LIBDIR= 00:16:42.810 09:45:06 -- common/build_config.sh@10 -- # CONFIG_IDXD=y 00:16:42.810 09:45:06 -- common/build_config.sh@11 -- # CONFIG_NVME_CUSE=y 00:16:42.810 09:45:06 -- common/build_config.sh@12 -- # CONFIG_SMA=n 00:16:42.810 09:45:06 -- common/build_config.sh@13 -- # CONFIG_VTUNE=n 00:16:42.810 09:45:06 -- common/build_config.sh@14 -- # CONFIG_TSAN=n 00:16:42.810 09:45:06 -- common/build_config.sh@15 -- # CONFIG_RDMA_SEND_WITH_INVAL=y 00:16:42.810 09:45:06 -- 
common/build_config.sh@16 -- # CONFIG_VFIO_USER_DIR= 00:16:42.810 09:45:06 -- common/build_config.sh@17 -- # CONFIG_PGO_CAPTURE=n 00:16:42.810 09:45:06 -- common/build_config.sh@18 -- # CONFIG_HAVE_UUID_GENERATE_SHA1=y 00:16:42.810 09:45:06 -- common/build_config.sh@19 -- # CONFIG_ENV=/home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:16:42.810 09:45:06 -- common/build_config.sh@20 -- # CONFIG_LTO=n 00:16:42.810 09:45:06 -- common/build_config.sh@21 -- # CONFIG_ISCSI_INITIATOR=y 00:16:42.810 09:45:06 -- common/build_config.sh@22 -- # CONFIG_CET=n 00:16:42.810 09:45:06 -- common/build_config.sh@23 -- # CONFIG_VBDEV_COMPRESS_MLX5=n 00:16:42.810 09:45:06 -- common/build_config.sh@24 -- # CONFIG_OCF_PATH= 00:16:42.810 09:45:06 -- common/build_config.sh@25 -- # CONFIG_RDMA_SET_TOS=y 00:16:42.810 09:45:06 -- common/build_config.sh@26 -- # CONFIG_HAVE_ARC4RANDOM=y 00:16:42.810 09:45:06 -- common/build_config.sh@27 -- # CONFIG_HAVE_LIBARCHIVE=n 00:16:42.810 09:45:06 -- common/build_config.sh@28 -- # CONFIG_UBLK=y 00:16:42.810 09:45:06 -- common/build_config.sh@29 -- # CONFIG_ISAL_CRYPTO=y 00:16:42.810 09:45:06 -- common/build_config.sh@30 -- # CONFIG_OPENSSL_PATH= 00:16:42.810 09:45:06 -- common/build_config.sh@31 -- # CONFIG_OCF=n 00:16:42.811 09:45:06 -- common/build_config.sh@32 -- # CONFIG_FUSE=n 00:16:42.811 09:45:06 -- common/build_config.sh@33 -- # CONFIG_VTUNE_DIR= 00:16:42.811 09:45:06 -- common/build_config.sh@34 -- # CONFIG_FUZZER_LIB= 00:16:42.811 09:45:06 -- common/build_config.sh@35 -- # CONFIG_FUZZER=n 00:16:42.811 09:45:06 -- common/build_config.sh@36 -- # CONFIG_DPDK_DIR=/home/vagrant/spdk_repo/spdk/dpdk/build 00:16:42.811 09:45:06 -- common/build_config.sh@37 -- # CONFIG_CRYPTO=n 00:16:42.811 09:45:06 -- common/build_config.sh@38 -- # CONFIG_PGO_USE=n 00:16:42.811 09:45:06 -- common/build_config.sh@39 -- # CONFIG_VHOST=y 00:16:42.811 09:45:06 -- common/build_config.sh@40 -- # CONFIG_DAOS=n 00:16:42.811 09:45:06 -- common/build_config.sh@41 -- # CONFIG_DPDK_INC_DIR= 00:16:42.811 09:45:06 -- common/build_config.sh@42 -- # CONFIG_DAOS_DIR= 00:16:42.811 09:45:06 -- common/build_config.sh@43 -- # CONFIG_UNIT_TESTS=n 00:16:42.811 09:45:06 -- common/build_config.sh@44 -- # CONFIG_RDMA_SET_ACK_TIMEOUT=y 00:16:42.811 09:45:06 -- common/build_config.sh@45 -- # CONFIG_VIRTIO=y 00:16:42.811 09:45:06 -- common/build_config.sh@46 -- # CONFIG_COVERAGE=y 00:16:42.811 09:45:06 -- common/build_config.sh@47 -- # CONFIG_RDMA=y 00:16:42.811 09:45:06 -- common/build_config.sh@48 -- # CONFIG_FIO_SOURCE_DIR=/usr/src/fio 00:16:42.811 09:45:06 -- common/build_config.sh@49 -- # CONFIG_URING_PATH= 00:16:42.811 09:45:06 -- common/build_config.sh@50 -- # CONFIG_XNVME=n 00:16:42.811 09:45:06 -- common/build_config.sh@51 -- # CONFIG_VFIO_USER=n 00:16:42.811 09:45:06 -- common/build_config.sh@52 -- # CONFIG_ARCH=native 00:16:42.811 09:45:06 -- common/build_config.sh@53 -- # CONFIG_HAVE_EVP_MAC=y 00:16:42.811 09:45:06 -- common/build_config.sh@54 -- # CONFIG_URING_ZNS=y 00:16:42.811 09:45:06 -- common/build_config.sh@55 -- # CONFIG_WERROR=y 00:16:42.811 09:45:06 -- common/build_config.sh@56 -- # CONFIG_HAVE_LIBBSD=n 00:16:42.811 09:45:06 -- common/build_config.sh@57 -- # CONFIG_UBSAN=y 00:16:42.811 09:45:06 -- common/build_config.sh@58 -- # CONFIG_IPSEC_MB_DIR= 00:16:42.811 09:45:06 -- common/build_config.sh@59 -- # CONFIG_GOLANG=n 00:16:42.811 09:45:06 -- common/build_config.sh@60 -- # CONFIG_ISAL=y 00:16:42.811 09:45:06 -- common/build_config.sh@61 -- # CONFIG_IDXD_KERNEL=n 00:16:42.811 09:45:06 -- 
common/build_config.sh@62 -- # CONFIG_DPDK_LIB_DIR= 00:16:42.811 09:45:06 -- common/build_config.sh@63 -- # CONFIG_RDMA_PROV=verbs 00:16:42.811 09:45:06 -- common/build_config.sh@64 -- # CONFIG_APPS=y 00:16:42.811 09:45:06 -- common/build_config.sh@65 -- # CONFIG_SHARED=y 00:16:42.811 09:45:06 -- common/build_config.sh@66 -- # CONFIG_HAVE_KEYUTILS=n 00:16:42.811 09:45:06 -- common/build_config.sh@67 -- # CONFIG_FC_PATH= 00:16:42.811 09:45:06 -- common/build_config.sh@68 -- # CONFIG_DPDK_PKG_CONFIG=n 00:16:42.811 09:45:06 -- common/build_config.sh@69 -- # CONFIG_FC=n 00:16:42.811 09:45:06 -- common/build_config.sh@70 -- # CONFIG_AVAHI=n 00:16:42.811 09:45:06 -- common/build_config.sh@71 -- # CONFIG_FIO_PLUGIN=y 00:16:42.811 09:45:06 -- common/build_config.sh@72 -- # CONFIG_RAID5F=n 00:16:42.811 09:45:06 -- common/build_config.sh@73 -- # CONFIG_EXAMPLES=y 00:16:42.811 09:45:06 -- common/build_config.sh@74 -- # CONFIG_TESTS=y 00:16:42.811 09:45:06 -- common/build_config.sh@75 -- # CONFIG_CRYPTO_MLX5=n 00:16:42.811 09:45:06 -- common/build_config.sh@76 -- # CONFIG_MAX_LCORES= 00:16:42.811 09:45:06 -- common/build_config.sh@77 -- # CONFIG_IPSEC_MB=n 00:16:42.811 09:45:06 -- common/build_config.sh@78 -- # CONFIG_PGO_DIR= 00:16:42.811 09:45:06 -- common/build_config.sh@79 -- # CONFIG_DEBUG=y 00:16:42.811 09:45:06 -- common/build_config.sh@80 -- # CONFIG_DPDK_COMPRESSDEV=n 00:16:42.811 09:45:06 -- common/build_config.sh@81 -- # CONFIG_CROSS_PREFIX= 00:16:42.811 09:45:06 -- common/build_config.sh@82 -- # CONFIG_URING=y 00:16:42.811 09:45:06 -- dd/common.sh@149 -- # [[ y != y ]] 00:16:42.811 09:45:06 -- dd/common.sh@152 -- # [[ ! -e /usr/lib64/liburing.so.2 ]] 00:16:42.811 09:45:06 -- dd/common.sh@156 -- # export liburing_in_use=1 00:16:42.811 09:45:06 -- dd/common.sh@156 -- # liburing_in_use=1 00:16:42.811 09:45:06 -- dd/common.sh@157 -- # return 0 00:16:42.811 09:45:06 -- dd/dd.sh@15 -- # (( liburing_in_use == 0 && SPDK_TEST_URING == 1 )) 00:16:42.811 09:45:06 -- dd/dd.sh@20 -- # run_test spdk_dd_basic_rw /home/vagrant/spdk_repo/spdk/test/dd/basic_rw.sh 0000:00:10.0 0000:00:11.0 00:16:42.811 09:45:06 -- common/autotest_common.sh@1087 -- # '[' 4 -le 1 ']' 00:16:42.811 09:45:06 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:16:42.811 09:45:06 -- common/autotest_common.sh@10 -- # set +x 00:16:42.811 ************************************ 00:16:42.811 START TEST spdk_dd_basic_rw 00:16:42.811 ************************************ 00:16:42.811 09:45:06 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/dd/basic_rw.sh 0000:00:10.0 0000:00:11.0 00:16:43.069 * Looking for test storage... 
00:16:43.069 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:16:43.069 09:45:06 -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:16:43.069 09:45:06 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:43.069 09:45:06 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:43.069 09:45:06 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:43.070 09:45:06 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:43.070 09:45:06 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:43.070 09:45:06 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:43.070 09:45:06 -- paths/export.sh@5 -- # export PATH 00:16:43.070 09:45:06 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:43.070 09:45:06 -- dd/basic_rw.sh@80 -- # trap cleanup EXIT 00:16:43.070 09:45:06 -- dd/basic_rw.sh@82 -- # nvmes=("$@") 00:16:43.070 09:45:06 -- dd/basic_rw.sh@83 -- # nvme0=Nvme0 00:16:43.070 09:45:06 -- dd/basic_rw.sh@83 -- # nvme0_pci=0000:00:10.0 00:16:43.070 09:45:06 -- dd/basic_rw.sh@83 -- # bdev0=Nvme0n1 00:16:43.070 09:45:06 -- dd/basic_rw.sh@85 -- # method_bdev_nvme_attach_controller_0=(['name']='Nvme0' ['traddr']='0000:00:10.0' ['trtype']='pcie') 00:16:43.070 09:45:06 -- dd/basic_rw.sh@85 -- # declare -A method_bdev_nvme_attach_controller_0 00:16:43.070 09:45:06 -- dd/basic_rw.sh@91 -- # test_file0=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:16:43.070 09:45:06 -- dd/basic_rw.sh@92 -- # 
test_file1=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:16:43.070 09:45:06 -- dd/basic_rw.sh@93 -- # get_native_nvme_bs 0000:00:10.0 00:16:43.070 09:45:06 -- dd/common.sh@124 -- # local pci=0000:00:10.0 lbaf id 00:16:43.070 09:45:06 -- dd/common.sh@126 -- # mapfile -t id 00:16:43.070 09:45:06 -- dd/common.sh@126 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:pcie traddr:0000:00:10.0' 00:16:43.330 09:45:06 -- dd/common.sh@129 -- # [[ ===================================================== NVMe Controller at 0000:00:10.0 [1b36:0010] ===================================================== Controller Capabilities/Features ================================ Vendor ID: 1b36 Subsystem Vendor ID: 1af4 Serial Number: 12340 Model Number: QEMU NVMe Ctrl Firmware Version: 8.0.0 Recommended Arb Burst: 6 IEEE OUI Identifier: 00 54 52 Multi-path I/O May have multiple subsystem ports: No May have multiple controllers: No Associated with SR-IOV VF: No Max Data Transfer Size: 524288 Max Number of Namespaces: 256 Max Number of I/O Queues: 64 NVMe Specification Version (VS): 1.4 NVMe Specification Version (Identify): 1.4 Maximum Queue Entries: 2048 Contiguous Queues Required: Yes Arbitration Mechanisms Supported Weighted Round Robin: Not Supported Vendor Specific: Not Supported Reset Timeout: 7500 ms Doorbell Stride: 4 bytes NVM Subsystem Reset: Not Supported Command Sets Supported NVM Command Set: Supported Boot Partition: Not Supported Memory Page Size Minimum: 4096 bytes Memory Page Size Maximum: 65536 bytes Persistent Memory Region: Not Supported Optional Asynchronous Events Supported Namespace Attribute Notices: Supported Firmware Activation Notices: Not Supported ANA Change Notices: Not Supported PLE Aggregate Log Change Notices: Not Supported LBA Status Info Alert Notices: Not Supported EGE Aggregate Log Change Notices: Not Supported Normal NVM Subsystem Shutdown event: Not Supported Zone Descriptor Change Notices: Not Supported Discovery Log Change Notices: Not Supported Controller Attributes 128-bit Host Identifier: Not Supported Non-Operational Permissive Mode: Not Supported NVM Sets: Not Supported Read Recovery Levels: Not Supported Endurance Groups: Not Supported Predictable Latency Mode: Not Supported Traffic Based Keep ALive: Not Supported Namespace Granularity: Not Supported SQ Associations: Not Supported UUID List: Not Supported Multi-Domain Subsystem: Not Supported Fixed Capacity Management: Not Supported Variable Capacity Management: Not Supported Delete Endurance Group: Not Supported Delete NVM Set: Not Supported Extended LBA Formats Supported: Supported Flexible Data Placement Supported: Not Supported Controller Memory Buffer Support ================================ Supported: No Persistent Memory Region Support ================================ Supported: No Admin Command Set Attributes ============================ Security Send/Receive: Not Supported Format NVM: Supported Firmware Activate/Download: Not Supported Namespace Management: Supported Device Self-Test: Not Supported Directives: Supported NVMe-MI: Not Supported Virtualization Management: Not Supported Doorbell Buffer Config: Supported Get LBA Status Capability: Not Supported Command & Feature Lockdown Capability: Not Supported Abort Command Limit: 4 Async Event Request Limit: 4 Number of Firmware Slots: N/A Firmware Slot 1 Read-Only: N/A Firmware Activation Without Reset: N/A Multiple Update Detection Support: N/A Firmware Update Granularity: No Information Provided Per-Namespace SMART Log: Yes Asymmetric 
Namespace Access Log Page: Not Supported Subsystem NQN: nqn.2019-08.org.qemu:12340 Command Effects Log Page: Supported Get Log Page Extended Data: Supported Telemetry Log Pages: Not Supported Persistent Event Log Pages: Not Supported Supported Log Pages Log Page: May Support Commands Supported & Effects Log Page: Not Supported Feature Identifiers & Effects Log Page:May Support NVMe-MI Commands & Effects Log Page: May Support Data Area 4 for Telemetry Log: Not Supported Error Log Page Entries Supported: 1 Keep Alive: Not Supported NVM Command Set Attributes ========================== Submission Queue Entry Size Max: 64 Min: 64 Completion Queue Entry Size Max: 16 Min: 16 Number of Namespaces: 256 Compare Command: Supported Write Uncorrectable Command: Not Supported Dataset Management Command: Supported Write Zeroes Command: Supported Set Features Save Field: Supported Reservations: Not Supported Timestamp: Supported Copy: Supported Volatile Write Cache: Present Atomic Write Unit (Normal): 1 Atomic Write Unit (PFail): 1 Atomic Compare & Write Unit: 1 Fused Compare & Write: Not Supported Scatter-Gather List SGL Command Set: Supported SGL Keyed: Not Supported SGL Bit Bucket Descriptor: Not Supported SGL Metadata Pointer: Not Supported Oversized SGL: Not Supported SGL Metadata Address: Not Supported SGL Offset: Not Supported Transport SGL Data Block: Not Supported Replay Protected Memory Block: Not Supported Firmware Slot Information ========================= Active slot: 1 Slot 1 Firmware Revision: 1.0 Commands Supported and Effects ============================== Admin Commands -------------- Delete I/O Submission Queue (00h): Supported Create I/O Submission Queue (01h): Supported Get Log Page (02h): Supported Delete I/O Completion Queue (04h): Supported Create I/O Completion Queue (05h): Supported Identify (06h): Supported Abort (08h): Supported Set Features (09h): Supported Get Features (0Ah): Supported Asynchronous Event Request (0Ch): Supported Namespace Attachment (15h): Supported NS-Inventory-Change Directive Send (19h): Supported Directive Receive (1Ah): Supported Virtualization Management (1Ch): Supported Doorbell Buffer Config (7Ch): Supported Format NVM (80h): Supported LBA-Change I/O Commands ------------ Flush (00h): Supported LBA-Change Write (01h): Supported LBA-Change Read (02h): Supported Compare (05h): Supported Write Zeroes (08h): Supported LBA-Change Dataset Management (09h): Supported LBA-Change Unknown (0Ch): Supported Unknown (12h): Supported Copy (19h): Supported LBA-Change Unknown (1Dh): Supported LBA-Change Error Log ========= Arbitration =========== Arbitration Burst: no limit Power Management ================ Number of Power States: 1 Current Power State: Power State #0 Power State #0: Max Power: 25.00 W Non-Operational State: Operational Entry Latency: 16 microseconds Exit Latency: 4 microseconds Relative Read Throughput: 0 Relative Read Latency: 0 Relative Write Throughput: 0 Relative Write Latency: 0 Idle Power: Not Reported Active Power: Not Reported Non-Operational Permissive Mode: Not Supported Health Information ================== Critical Warnings: Available Spare Space: OK Temperature: OK Device Reliability: OK Read Only: No Volatile Memory Backup: OK Current Temperature: 323 Kelvin (50 Celsius) Temperature Threshold: 343 Kelvin (70 Celsius) Available Spare: 0% Available Spare Threshold: 0% Life Percentage Used: 0% Data Units Read: 57 Data Units Written: 3 Host Read Commands: 1329 Host Write Commands: 2 Controller Busy Time: 0 minutes Power Cycles: 0 Power On 
Hours: 0 hours Unsafe Shutdowns: 0 Unrecoverable Media Errors: 0 Lifetime Error Log Entries: 0 Warning Temperature Time: 0 minutes Critical Temperature Time: 0 minutes Number of Queues ================ Number of I/O Submission Queues: 64 Number of I/O Completion Queues: 64 ZNS Specific Controller Data ============================ Zone Append Size Limit: 0 Active Namespaces ================= Namespace ID:1 Error Recovery Timeout: Unlimited Command Set Identifier: NVM (00h) Deallocate: Supported Deallocated/Unwritten Error: Supported Deallocated Read Value: All 0x00 Deallocate in Write Zeroes: Not Supported Deallocated Guard Field: 0xFFFF Flush: Supported Reservation: Not Supported Namespace Sharing Capabilities: Private Size (in LBAs): 1310720 (5GiB) Capacity (in LBAs): 1310720 (5GiB) Utilization (in LBAs): 1310720 (5GiB) Thin Provisioning: Not Supported Per-NS Atomic Units: No Maximum Single Source Range Length: 128 Maximum Copy Length: 128 Maximum Source Range Count: 128 NGUID/EUI64 Never Reused: No Namespace Write Protected: No Number of LBA Formats: 8 Current LBA Format: LBA Format #04 LBA Format #00: Data Size: 512 Metadata Size: 0 LBA Format #01: Data Size: 512 Metadata Size: 8 LBA Format #02: Data Size: 512 Metadata Size: 16 LBA Format #03: Data Size: 512 Metadata Size: 64 LBA Format #04: Data Size: 4096 Metadata Size: 0 LBA Format #05: Data Size: 4096 Metadata Size: 8 LBA Format #06: Data Size: 4096 Metadata Size: 16 LBA Format #07: Data Size: 4096 Metadata Size: 64 =~ Current LBA Format: *LBA Format #([0-9]+) ]] 00:16:43.330 09:45:06 -- dd/common.sh@130 -- # lbaf=04 00:16:43.331 09:45:06 -- dd/common.sh@131 -- # [[ ===================================================== NVMe Controller at 0000:00:10.0 [1b36:0010] ===================================================== Controller Capabilities/Features ================================ Vendor ID: 1b36 Subsystem Vendor ID: 1af4 Serial Number: 12340 Model Number: QEMU NVMe Ctrl Firmware Version: 8.0.0 Recommended Arb Burst: 6 IEEE OUI Identifier: 00 54 52 Multi-path I/O May have multiple subsystem ports: No May have multiple controllers: No Associated with SR-IOV VF: No Max Data Transfer Size: 524288 Max Number of Namespaces: 256 Max Number of I/O Queues: 64 NVMe Specification Version (VS): 1.4 NVMe Specification Version (Identify): 1.4 Maximum Queue Entries: 2048 Contiguous Queues Required: Yes Arbitration Mechanisms Supported Weighted Round Robin: Not Supported Vendor Specific: Not Supported Reset Timeout: 7500 ms Doorbell Stride: 4 bytes NVM Subsystem Reset: Not Supported Command Sets Supported NVM Command Set: Supported Boot Partition: Not Supported Memory Page Size Minimum: 4096 bytes Memory Page Size Maximum: 65536 bytes Persistent Memory Region: Not Supported Optional Asynchronous Events Supported Namespace Attribute Notices: Supported Firmware Activation Notices: Not Supported ANA Change Notices: Not Supported PLE Aggregate Log Change Notices: Not Supported LBA Status Info Alert Notices: Not Supported EGE Aggregate Log Change Notices: Not Supported Normal NVM Subsystem Shutdown event: Not Supported Zone Descriptor Change Notices: Not Supported Discovery Log Change Notices: Not Supported Controller Attributes 128-bit Host Identifier: Not Supported Non-Operational Permissive Mode: Not Supported NVM Sets: Not Supported Read Recovery Levels: Not Supported Endurance Groups: Not Supported Predictable Latency Mode: Not Supported Traffic Based Keep ALive: Not Supported Namespace Granularity: Not Supported SQ Associations: Not Supported UUID 
List: Not Supported Multi-Domain Subsystem: Not Supported Fixed Capacity Management: Not Supported Variable Capacity Management: Not Supported Delete Endurance Group: Not Supported Delete NVM Set: Not Supported Extended LBA Formats Supported: Supported Flexible Data Placement Supported: Not Supported Controller Memory Buffer Support ================================ Supported: No Persistent Memory Region Support ================================ Supported: No Admin Command Set Attributes ============================ Security Send/Receive: Not Supported Format NVM: Supported Firmware Activate/Download: Not Supported Namespace Management: Supported Device Self-Test: Not Supported Directives: Supported NVMe-MI: Not Supported Virtualization Management: Not Supported Doorbell Buffer Config: Supported Get LBA Status Capability: Not Supported Command & Feature Lockdown Capability: Not Supported Abort Command Limit: 4 Async Event Request Limit: 4 Number of Firmware Slots: N/A Firmware Slot 1 Read-Only: N/A Firmware Activation Without Reset: N/A Multiple Update Detection Support: N/A Firmware Update Granularity: No Information Provided Per-Namespace SMART Log: Yes Asymmetric Namespace Access Log Page: Not Supported Subsystem NQN: nqn.2019-08.org.qemu:12340 Command Effects Log Page: Supported Get Log Page Extended Data: Supported Telemetry Log Pages: Not Supported Persistent Event Log Pages: Not Supported Supported Log Pages Log Page: May Support Commands Supported & Effects Log Page: Not Supported Feature Identifiers & Effects Log Page:May Support NVMe-MI Commands & Effects Log Page: May Support Data Area 4 for Telemetry Log: Not Supported Error Log Page Entries Supported: 1 Keep Alive: Not Supported NVM Command Set Attributes ========================== Submission Queue Entry Size Max: 64 Min: 64 Completion Queue Entry Size Max: 16 Min: 16 Number of Namespaces: 256 Compare Command: Supported Write Uncorrectable Command: Not Supported Dataset Management Command: Supported Write Zeroes Command: Supported Set Features Save Field: Supported Reservations: Not Supported Timestamp: Supported Copy: Supported Volatile Write Cache: Present Atomic Write Unit (Normal): 1 Atomic Write Unit (PFail): 1 Atomic Compare & Write Unit: 1 Fused Compare & Write: Not Supported Scatter-Gather List SGL Command Set: Supported SGL Keyed: Not Supported SGL Bit Bucket Descriptor: Not Supported SGL Metadata Pointer: Not Supported Oversized SGL: Not Supported SGL Metadata Address: Not Supported SGL Offset: Not Supported Transport SGL Data Block: Not Supported Replay Protected Memory Block: Not Supported Firmware Slot Information ========================= Active slot: 1 Slot 1 Firmware Revision: 1.0 Commands Supported and Effects ============================== Admin Commands -------------- Delete I/O Submission Queue (00h): Supported Create I/O Submission Queue (01h): Supported Get Log Page (02h): Supported Delete I/O Completion Queue (04h): Supported Create I/O Completion Queue (05h): Supported Identify (06h): Supported Abort (08h): Supported Set Features (09h): Supported Get Features (0Ah): Supported Asynchronous Event Request (0Ch): Supported Namespace Attachment (15h): Supported NS-Inventory-Change Directive Send (19h): Supported Directive Receive (1Ah): Supported Virtualization Management (1Ch): Supported Doorbell Buffer Config (7Ch): Supported Format NVM (80h): Supported LBA-Change I/O Commands ------------ Flush (00h): Supported LBA-Change Write (01h): Supported LBA-Change Read (02h): Supported Compare (05h): Supported Write 
Zeroes (08h): Supported LBA-Change Dataset Management (09h): Supported LBA-Change Unknown (0Ch): Supported Unknown (12h): Supported Copy (19h): Supported LBA-Change Unknown (1Dh): Supported LBA-Change Error Log ========= Arbitration =========== Arbitration Burst: no limit Power Management ================ Number of Power States: 1 Current Power State: Power State #0 Power State #0: Max Power: 25.00 W Non-Operational State: Operational Entry Latency: 16 microseconds Exit Latency: 4 microseconds Relative Read Throughput: 0 Relative Read Latency: 0 Relative Write Throughput: 0 Relative Write Latency: 0 Idle Power: Not Reported Active Power: Not Reported Non-Operational Permissive Mode: Not Supported Health Information ================== Critical Warnings: Available Spare Space: OK Temperature: OK Device Reliability: OK Read Only: No Volatile Memory Backup: OK Current Temperature: 323 Kelvin (50 Celsius) Temperature Threshold: 343 Kelvin (70 Celsius) Available Spare: 0% Available Spare Threshold: 0% Life Percentage Used: 0% Data Units Read: 57 Data Units Written: 3 Host Read Commands: 1329 Host Write Commands: 2 Controller Busy Time: 0 minutes Power Cycles: 0 Power On Hours: 0 hours Unsafe Shutdowns: 0 Unrecoverable Media Errors: 0 Lifetime Error Log Entries: 0 Warning Temperature Time: 0 minutes Critical Temperature Time: 0 minutes Number of Queues ================ Number of I/O Submission Queues: 64 Number of I/O Completion Queues: 64 ZNS Specific Controller Data ============================ Zone Append Size Limit: 0 Active Namespaces ================= Namespace ID:1 Error Recovery Timeout: Unlimited Command Set Identifier: NVM (00h) Deallocate: Supported Deallocated/Unwritten Error: Supported Deallocated Read Value: All 0x00 Deallocate in Write Zeroes: Not Supported Deallocated Guard Field: 0xFFFF Flush: Supported Reservation: Not Supported Namespace Sharing Capabilities: Private Size (in LBAs): 1310720 (5GiB) Capacity (in LBAs): 1310720 (5GiB) Utilization (in LBAs): 1310720 (5GiB) Thin Provisioning: Not Supported Per-NS Atomic Units: No Maximum Single Source Range Length: 128 Maximum Copy Length: 128 Maximum Source Range Count: 128 NGUID/EUI64 Never Reused: No Namespace Write Protected: No Number of LBA Formats: 8 Current LBA Format: LBA Format #04 LBA Format #00: Data Size: 512 Metadata Size: 0 LBA Format #01: Data Size: 512 Metadata Size: 8 LBA Format #02: Data Size: 512 Metadata Size: 16 LBA Format #03: Data Size: 512 Metadata Size: 64 LBA Format #04: Data Size: 4096 Metadata Size: 0 LBA Format #05: Data Size: 4096 Metadata Size: 8 LBA Format #06: Data Size: 4096 Metadata Size: 16 LBA Format #07: Data Size: 4096 Metadata Size: 64 =~ LBA Format #04: Data Size: *([0-9]+) ]] 00:16:43.331 09:45:06 -- dd/common.sh@132 -- # lbaf=4096 00:16:43.331 09:45:06 -- dd/common.sh@134 -- # echo 4096 00:16:43.331 09:45:06 -- dd/basic_rw.sh@93 -- # native_bs=4096 00:16:43.331 09:45:06 -- dd/basic_rw.sh@96 -- # run_test dd_bs_lt_native_bs NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/fd/62 --ob=Nvme0n1 --bs=2048 --json /dev/fd/61 00:16:43.331 09:45:06 -- common/autotest_common.sh@1087 -- # '[' 8 -le 1 ']' 00:16:43.331 09:45:06 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:16:43.331 09:45:06 -- common/autotest_common.sh@10 -- # set +x 00:16:43.331 09:45:06 -- dd/basic_rw.sh@96 -- # gen_conf 00:16:43.331 09:45:06 -- dd/basic_rw.sh@96 -- # : 00:16:43.331 09:45:06 -- dd/common.sh@31 -- # xtrace_disable 00:16:43.331 09:45:06 -- common/autotest_common.sh@10 -- # set +x 00:16:43.331 { 
00:16:43.331 "subsystems": [ 00:16:43.331 { 00:16:43.331 "subsystem": "bdev", 00:16:43.331 "config": [ 00:16:43.331 { 00:16:43.331 "params": { 00:16:43.331 "trtype": "pcie", 00:16:43.331 "traddr": "0000:00:10.0", 00:16:43.331 "name": "Nvme0" 00:16:43.331 }, 00:16:43.331 "method": "bdev_nvme_attach_controller" 00:16:43.331 }, 00:16:43.331 { 00:16:43.331 "method": "bdev_wait_for_examine" 00:16:43.331 } 00:16:43.331 ] 00:16:43.331 } 00:16:43.331 ] 00:16:43.331 } 00:16:43.331 ************************************ 00:16:43.331 START TEST dd_bs_lt_native_bs 00:16:43.331 ************************************ 00:16:43.331 09:45:06 -- common/autotest_common.sh@1111 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/fd/62 --ob=Nvme0n1 --bs=2048 --json /dev/fd/61 00:16:43.331 09:45:06 -- common/autotest_common.sh@638 -- # local es=0 00:16:43.331 09:45:06 -- common/autotest_common.sh@640 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/fd/62 --ob=Nvme0n1 --bs=2048 --json /dev/fd/61 00:16:43.331 09:45:06 -- common/autotest_common.sh@626 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:16:43.331 09:45:06 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:16:43.331 09:45:06 -- common/autotest_common.sh@630 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:16:43.331 09:45:06 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:16:43.331 09:45:06 -- common/autotest_common.sh@632 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:16:43.331 09:45:06 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:16:43.331 09:45:06 -- common/autotest_common.sh@632 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:16:43.331 09:45:06 -- common/autotest_common.sh@632 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:16:43.331 09:45:06 -- common/autotest_common.sh@641 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/fd/62 --ob=Nvme0n1 --bs=2048 --json /dev/fd/61 00:16:43.589 [2024-04-18 09:45:06.741240] Starting SPDK v24.05-pre git sha1 65b4e17c6 / DPDK 23.11.0 initialization... 
00:16:43.589 [2024-04-18 09:45:06.741410] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63710 ] 00:16:43.589 [2024-04-18 09:45:06.916748] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:43.846 [2024-04-18 09:45:07.200513] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:16:44.413 [2024-04-18 09:45:07.597751] spdk_dd.c:1161:dd_run: *ERROR*: --bs value cannot be less than input (1) neither output (4096) native block size 00:16:44.413 [2024-04-18 09:45:07.598010] app.c: 966:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:16:44.981 [2024-04-18 09:45:08.145225] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:16:45.239 09:45:08 -- common/autotest_common.sh@641 -- # es=234 00:16:45.239 09:45:08 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:16:45.239 09:45:08 -- common/autotest_common.sh@650 -- # es=106 00:16:45.239 09:45:08 -- common/autotest_common.sh@651 -- # case "$es" in 00:16:45.239 09:45:08 -- common/autotest_common.sh@658 -- # es=1 00:16:45.239 ************************************ 00:16:45.239 END TEST dd_bs_lt_native_bs 00:16:45.239 ************************************ 00:16:45.239 09:45:08 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:16:45.239 00:16:45.239 real 0m1.960s 00:16:45.239 user 0m1.571s 00:16:45.239 sys 0m0.276s 00:16:45.239 09:45:08 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:16:45.239 09:45:08 -- common/autotest_common.sh@10 -- # set +x 00:16:45.498 09:45:08 -- dd/basic_rw.sh@103 -- # run_test dd_rw basic_rw 4096 00:16:45.498 09:45:08 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:16:45.498 09:45:08 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:16:45.498 09:45:08 -- common/autotest_common.sh@10 -- # set +x 00:16:45.498 ************************************ 00:16:45.498 START TEST dd_rw 00:16:45.498 ************************************ 00:16:45.498 09:45:08 -- common/autotest_common.sh@1111 -- # basic_rw 4096 00:16:45.498 09:45:08 -- dd/basic_rw.sh@11 -- # local native_bs=4096 00:16:45.498 09:45:08 -- dd/basic_rw.sh@12 -- # local count size 00:16:45.498 09:45:08 -- dd/basic_rw.sh@13 -- # local qds bss 00:16:45.498 09:45:08 -- dd/basic_rw.sh@15 -- # qds=(1 64) 00:16:45.498 09:45:08 -- dd/basic_rw.sh@17 -- # for bs in {0..2} 00:16:45.498 09:45:08 -- dd/basic_rw.sh@18 -- # bss+=($((native_bs << bs))) 00:16:45.498 09:45:08 -- dd/basic_rw.sh@17 -- # for bs in {0..2} 00:16:45.498 09:45:08 -- dd/basic_rw.sh@18 -- # bss+=($((native_bs << bs))) 00:16:45.498 09:45:08 -- dd/basic_rw.sh@17 -- # for bs in {0..2} 00:16:45.498 09:45:08 -- dd/basic_rw.sh@18 -- # bss+=($((native_bs << bs))) 00:16:45.498 09:45:08 -- dd/basic_rw.sh@21 -- # for bs in "${bss[@]}" 00:16:45.498 09:45:08 -- dd/basic_rw.sh@22 -- # for qd in "${qds[@]}" 00:16:45.498 09:45:08 -- dd/basic_rw.sh@23 -- # count=15 00:16:45.498 09:45:08 -- dd/basic_rw.sh@24 -- # count=15 00:16:45.498 09:45:08 -- dd/basic_rw.sh@25 -- # size=61440 00:16:45.498 09:45:08 -- dd/basic_rw.sh@27 -- # gen_bytes 61440 00:16:45.498 09:45:08 -- dd/common.sh@98 -- # xtrace_disable 00:16:45.498 09:45:08 -- common/autotest_common.sh@10 -- # set +x 00:16:46.068 09:45:09 -- dd/basic_rw.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --bs=4096 --qd=1 --json /dev/fd/62 
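dd_bs_lt_native_bs is a negative test: spdk_dd is asked to write with --bs=2048 against a bdev whose native block size is 4096, spdk_dd rejects the request (the "--bs value cannot be less than ... native block size" error above) and exits non-zero, and the NOT wrapper plus the es= bookkeeping translate that expected failure into a passing result. A stripped-down stand-in for such a wrapper, which assumes nothing about the real NOT implementation in autotest_common.sh:

    # Hypothetical negative-test helper: the wrapped command must fail.
    expect_failure() {
        if "$@"; then
            echo "expected failure, but '$*' succeeded" >&2
            return 1
        fi
        return 0   # a non-zero exit from the wrapped command is the passing case
    }
    # e.g.: expect_failure "$spdk_dd" --if=/dev/fd/62 --ob=Nvme0n1 --bs=2048 --json /dev/fd/61

where $spdk_dd is assumed to point at the build/bin/spdk_dd binary used throughout this log.
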
00:16:46.068 09:45:09 -- dd/basic_rw.sh@30 -- # gen_conf 00:16:46.068 09:45:09 -- dd/common.sh@31 -- # xtrace_disable 00:16:46.068 09:45:09 -- common/autotest_common.sh@10 -- # set +x 00:16:46.325 { 00:16:46.325 "subsystems": [ 00:16:46.325 { 00:16:46.325 "subsystem": "bdev", 00:16:46.325 "config": [ 00:16:46.325 { 00:16:46.325 "params": { 00:16:46.325 "trtype": "pcie", 00:16:46.325 "traddr": "0000:00:10.0", 00:16:46.325 "name": "Nvme0" 00:16:46.325 }, 00:16:46.325 "method": "bdev_nvme_attach_controller" 00:16:46.325 }, 00:16:46.325 { 00:16:46.325 "method": "bdev_wait_for_examine" 00:16:46.325 } 00:16:46.325 ] 00:16:46.325 } 00:16:46.325 ] 00:16:46.325 } 00:16:46.325 [2024-04-18 09:45:09.505536] Starting SPDK v24.05-pre git sha1 65b4e17c6 / DPDK 23.11.0 initialization... 00:16:46.325 [2024-04-18 09:45:09.505918] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63762 ] 00:16:46.325 [2024-04-18 09:45:09.672002] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:46.583 [2024-04-18 09:45:09.938152] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:16:48.106  Copying: 60/60 [kB] (average 19 MBps) 00:16:48.106 00:16:48.106 09:45:11 -- dd/basic_rw.sh@37 -- # gen_conf 00:16:48.106 09:45:11 -- dd/basic_rw.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=4096 --qd=1 --count=15 --json /dev/fd/62 00:16:48.106 09:45:11 -- dd/common.sh@31 -- # xtrace_disable 00:16:48.106 09:45:11 -- common/autotest_common.sh@10 -- # set +x 00:16:48.365 { 00:16:48.365 "subsystems": [ 00:16:48.365 { 00:16:48.365 "subsystem": "bdev", 00:16:48.365 "config": [ 00:16:48.365 { 00:16:48.365 "params": { 00:16:48.365 "trtype": "pcie", 00:16:48.365 "traddr": "0000:00:10.0", 00:16:48.365 "name": "Nvme0" 00:16:48.365 }, 00:16:48.365 "method": "bdev_nvme_attach_controller" 00:16:48.365 }, 00:16:48.365 { 00:16:48.365 "method": "bdev_wait_for_examine" 00:16:48.365 } 00:16:48.365 ] 00:16:48.365 } 00:16:48.365 ] 00:16:48.365 } 00:16:48.365 [2024-04-18 09:45:11.583374] Starting SPDK v24.05-pre git sha1 65b4e17c6 / DPDK 23.11.0 initialization... 
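Every spdk_dd invocation in this trace receives its bdev configuration as JSON on an inherited file descriptor (--json /dev/fd/62); the repeated "subsystems"/"bdev" block is that configuration being echoed by gen_conf under xtrace, and it simply attaches the QEMU NVMe controller at 0000:00:10.0 as Nvme0 and waits for bdev examination. The same pattern can be reproduced outside the harness with process substitution; the gen_conf below is a hand-written stand-in that emits the config shown above, not the real dd/common.sh helper:

    # Illustrative stand-in that prints the bdev config seen in this log.
    gen_conf() {
        printf '%s\n' \
            '{ "subsystems": [ { "subsystem": "bdev", "config": [' \
            '  { "params": { "trtype": "pcie", "traddr": "0000:00:10.0", "name": "Nvme0" },' \
            '    "method": "bdev_nvme_attach_controller" },' \
            '  { "method": "bdev_wait_for_examine" } ] } ] }'
    }
    # <(gen_conf) expands to a /dev/fd/NN path, matching the --json /dev/fd/62 above:
    #   spdk_dd --if=dd.dump0 --ob=Nvme0n1 --bs=4096 --qd=1 --json <(gen_conf)
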
00:16:48.365 [2024-04-18 09:45:11.583533] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63794 ] 00:16:48.365 [2024-04-18 09:45:11.749221] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:48.623 [2024-04-18 09:45:11.988283] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:16:50.123  Copying: 60/60 [kB] (average 14 MBps) 00:16:50.123 00:16:50.123 09:45:13 -- dd/basic_rw.sh@44 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:16:50.123 09:45:13 -- dd/basic_rw.sh@45 -- # clear_nvme Nvme0n1 '' 61440 00:16:50.123 09:45:13 -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:16:50.123 09:45:13 -- dd/common.sh@11 -- # local nvme_ref= 00:16:50.123 09:45:13 -- dd/common.sh@12 -- # local size=61440 00:16:50.123 09:45:13 -- dd/common.sh@14 -- # local bs=1048576 00:16:50.123 09:45:13 -- dd/common.sh@15 -- # local count=1 00:16:50.123 09:45:13 -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 00:16:50.123 09:45:13 -- dd/common.sh@18 -- # gen_conf 00:16:50.123 09:45:13 -- dd/common.sh@31 -- # xtrace_disable 00:16:50.123 09:45:13 -- common/autotest_common.sh@10 -- # set +x 00:16:50.123 { 00:16:50.123 "subsystems": [ 00:16:50.123 { 00:16:50.123 "subsystem": "bdev", 00:16:50.123 "config": [ 00:16:50.123 { 00:16:50.123 "params": { 00:16:50.123 "trtype": "pcie", 00:16:50.123 "traddr": "0000:00:10.0", 00:16:50.123 "name": "Nvme0" 00:16:50.123 }, 00:16:50.123 "method": "bdev_nvme_attach_controller" 00:16:50.123 }, 00:16:50.123 { 00:16:50.123 "method": "bdev_wait_for_examine" 00:16:50.123 } 00:16:50.123 ] 00:16:50.123 } 00:16:50.123 ] 00:16:50.123 } 00:16:50.123 [2024-04-18 09:45:13.448807] Starting SPDK v24.05-pre git sha1 65b4e17c6 / DPDK 23.11.0 initialization... 
00:16:50.123 [2024-04-18 09:45:13.450070] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63827 ] 00:16:50.382 [2024-04-18 09:45:13.618956] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:50.640 [2024-04-18 09:45:13.866280] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:16:52.275  Copying: 1024/1024 [kB] (average 1000 MBps) 00:16:52.275 00:16:52.275 09:45:15 -- dd/basic_rw.sh@22 -- # for qd in "${qds[@]}" 00:16:52.275 09:45:15 -- dd/basic_rw.sh@23 -- # count=15 00:16:52.275 09:45:15 -- dd/basic_rw.sh@24 -- # count=15 00:16:52.275 09:45:15 -- dd/basic_rw.sh@25 -- # size=61440 00:16:52.275 09:45:15 -- dd/basic_rw.sh@27 -- # gen_bytes 61440 00:16:52.275 09:45:15 -- dd/common.sh@98 -- # xtrace_disable 00:16:52.275 09:45:15 -- common/autotest_common.sh@10 -- # set +x 00:16:52.843 09:45:16 -- dd/basic_rw.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --bs=4096 --qd=64 --json /dev/fd/62 00:16:52.843 09:45:16 -- dd/basic_rw.sh@30 -- # gen_conf 00:16:52.843 09:45:16 -- dd/common.sh@31 -- # xtrace_disable 00:16:52.843 09:45:16 -- common/autotest_common.sh@10 -- # set +x 00:16:52.843 { 00:16:52.843 "subsystems": [ 00:16:52.843 { 00:16:52.843 "subsystem": "bdev", 00:16:52.843 "config": [ 00:16:52.843 { 00:16:52.843 "params": { 00:16:52.843 "trtype": "pcie", 00:16:52.843 "traddr": "0000:00:10.0", 00:16:52.843 "name": "Nvme0" 00:16:52.843 }, 00:16:52.843 "method": "bdev_nvme_attach_controller" 00:16:52.843 }, 00:16:52.843 { 00:16:52.843 "method": "bdev_wait_for_examine" 00:16:52.843 } 00:16:52.843 ] 00:16:52.843 } 00:16:52.843 ] 00:16:52.843 } 00:16:53.101 [2024-04-18 09:45:16.245594] Starting SPDK v24.05-pre git sha1 65b4e17c6 / DPDK 23.11.0 initialization... 00:16:53.101 [2024-04-18 09:45:16.245857] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63864 ] 00:16:53.101 [2024-04-18 09:45:16.434396] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:53.360 [2024-04-18 09:45:16.696498] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:16:54.861  Copying: 60/60 [kB] (average 58 MBps) 00:16:54.861 00:16:54.861 09:45:18 -- dd/basic_rw.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=4096 --qd=64 --count=15 --json /dev/fd/62 00:16:54.862 09:45:18 -- dd/basic_rw.sh@37 -- # gen_conf 00:16:54.862 09:45:18 -- dd/common.sh@31 -- # xtrace_disable 00:16:54.862 09:45:18 -- common/autotest_common.sh@10 -- # set +x 00:16:54.862 { 00:16:54.862 "subsystems": [ 00:16:54.862 { 00:16:54.862 "subsystem": "bdev", 00:16:54.862 "config": [ 00:16:54.862 { 00:16:54.862 "params": { 00:16:54.862 "trtype": "pcie", 00:16:54.862 "traddr": "0000:00:10.0", 00:16:54.862 "name": "Nvme0" 00:16:54.862 }, 00:16:54.862 "method": "bdev_nvme_attach_controller" 00:16:54.862 }, 00:16:54.862 { 00:16:54.862 "method": "bdev_wait_for_examine" 00:16:54.862 } 00:16:54.862 ] 00:16:54.862 } 00:16:54.862 ] 00:16:54.862 } 00:16:54.862 [2024-04-18 09:45:18.156975] Starting SPDK v24.05-pre git sha1 65b4e17c6 / DPDK 23.11.0 initialization... 
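With the first pass (bs=4096, qd=1) finished and the qd=64 pass starting above, the shape of every dd_rw pass in this log is visible: write the generated dump file into the bdev, read the same region back into a second file, diff the two, then zero the region (clear_nvme) so the next pass cannot be satisfied by leftover data. Condensed into one hypothetical helper (spdk_dd stands for the build/bin/spdk_dd binary, gen_conf for the stand-in sketched earlier, and dd.dump0/dd.dump1 for the dump files under test/dd):

    # One write / read-back / verify / clear iteration, as exercised above.
    run_rw_iteration() {
        local bs=$1 qd=$2 count=$3
        spdk_dd --if=dd.dump0 --ob=Nvme0n1 --bs="$bs" --qd="$qd" --json <(gen_conf)
        spdk_dd --ib=Nvme0n1 --of=dd.dump1 --bs="$bs" --qd="$qd" --count="$count" --json <(gen_conf)
        diff -q dd.dump0 dd.dump1
        # Clear: overwrite with zeroes; count=1 suffices because every size
        # tested here stays under the 1 MiB clearing block size.
        spdk_dd --if=/dev/zero --ob=Nvme0n1 --bs=1048576 --count=1 --json <(gen_conf)
    }
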
00:16:54.862 [2024-04-18 09:45:18.157200] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63895 ] 00:16:55.120 [2024-04-18 09:45:18.337106] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:55.380 [2024-04-18 09:45:18.594844] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:16:57.036  Copying: 60/60 [kB] (average 58 MBps) 00:16:57.036 00:16:57.036 09:45:20 -- dd/basic_rw.sh@44 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:16:57.037 09:45:20 -- dd/basic_rw.sh@45 -- # clear_nvme Nvme0n1 '' 61440 00:16:57.037 09:45:20 -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:16:57.037 09:45:20 -- dd/common.sh@11 -- # local nvme_ref= 00:16:57.037 09:45:20 -- dd/common.sh@12 -- # local size=61440 00:16:57.037 09:45:20 -- dd/common.sh@14 -- # local bs=1048576 00:16:57.037 09:45:20 -- dd/common.sh@15 -- # local count=1 00:16:57.037 09:45:20 -- dd/common.sh@18 -- # gen_conf 00:16:57.037 09:45:20 -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 00:16:57.037 09:45:20 -- dd/common.sh@31 -- # xtrace_disable 00:16:57.037 09:45:20 -- common/autotest_common.sh@10 -- # set +x 00:16:57.037 { 00:16:57.037 "subsystems": [ 00:16:57.037 { 00:16:57.037 "subsystem": "bdev", 00:16:57.037 "config": [ 00:16:57.037 { 00:16:57.037 "params": { 00:16:57.037 "trtype": "pcie", 00:16:57.037 "traddr": "0000:00:10.0", 00:16:57.037 "name": "Nvme0" 00:16:57.037 }, 00:16:57.037 "method": "bdev_nvme_attach_controller" 00:16:57.037 }, 00:16:57.037 { 00:16:57.037 "method": "bdev_wait_for_examine" 00:16:57.037 } 00:16:57.037 ] 00:16:57.037 } 00:16:57.037 ] 00:16:57.037 } 00:16:57.037 [2024-04-18 09:45:20.273074] Starting SPDK v24.05-pre git sha1 65b4e17c6 / DPDK 23.11.0 initialization... 
00:16:57.037 [2024-04-18 09:45:20.273232] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63928 ] 00:16:57.296 [2024-04-18 09:45:20.437634] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:57.296 [2024-04-18 09:45:20.670875] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:16:58.800  Copying: 1024/1024 [kB] (average 1000 MBps) 00:16:58.800 00:16:58.800 09:45:21 -- dd/basic_rw.sh@21 -- # for bs in "${bss[@]}" 00:16:58.800 09:45:21 -- dd/basic_rw.sh@22 -- # for qd in "${qds[@]}" 00:16:58.800 09:45:21 -- dd/basic_rw.sh@23 -- # count=7 00:16:58.800 09:45:21 -- dd/basic_rw.sh@24 -- # count=7 00:16:58.800 09:45:21 -- dd/basic_rw.sh@25 -- # size=57344 00:16:58.800 09:45:21 -- dd/basic_rw.sh@27 -- # gen_bytes 57344 00:16:58.800 09:45:21 -- dd/common.sh@98 -- # xtrace_disable 00:16:58.800 09:45:21 -- common/autotest_common.sh@10 -- # set +x 00:16:59.376 09:45:22 -- dd/basic_rw.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --bs=8192 --qd=1 --json /dev/fd/62 00:16:59.376 09:45:22 -- dd/basic_rw.sh@30 -- # gen_conf 00:16:59.376 09:45:22 -- dd/common.sh@31 -- # xtrace_disable 00:16:59.376 09:45:22 -- common/autotest_common.sh@10 -- # set +x 00:16:59.376 { 00:16:59.376 "subsystems": [ 00:16:59.376 { 00:16:59.376 "subsystem": "bdev", 00:16:59.376 "config": [ 00:16:59.376 { 00:16:59.376 "params": { 00:16:59.376 "trtype": "pcie", 00:16:59.376 "traddr": "0000:00:10.0", 00:16:59.376 "name": "Nvme0" 00:16:59.376 }, 00:16:59.376 "method": "bdev_nvme_attach_controller" 00:16:59.376 }, 00:16:59.376 { 00:16:59.376 "method": "bdev_wait_for_examine" 00:16:59.376 } 00:16:59.376 ] 00:16:59.376 } 00:16:59.376 ] 00:16:59.376 } 00:16:59.376 [2024-04-18 09:45:22.698042] Starting SPDK v24.05-pre git sha1 65b4e17c6 / DPDK 23.11.0 initialization... 
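From this point the trace repeats that cycle for the remaining block sizes: bss is built by left-shifting the native block size (4096, 8192, 16384), each size runs at queue depths 1 and 64, and the read-back count shrinks as the block size grows. Reconstructed from the xtrace at the top of dd_rw, the driving loop is roughly the following, where run_rw_iteration is the sketch above and the counts map just records the values observed in this log:

    native_bs=4096
    qds=(1 64)
    bss=()
    for i in {0..2}; do
        bss+=( $(( native_bs << i )) )                     # 4096 8192 16384
    done
    declare -A counts=( [4096]=15 [8192]=7 [16384]=3 )     # as seen in this trace
    for bs in "${bss[@]}"; do
        for qd in "${qds[@]}"; do
            run_rw_iteration "$bs" "$qd" "${counts[$bs]}"
        done
    done
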
00:16:59.376 [2024-04-18 09:45:22.698306] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63959 ] 00:16:59.652 [2024-04-18 09:45:22.879958] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:59.912 [2024-04-18 09:45:23.164636] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:17:01.416  Copying: 56/56 [kB] (average 27 MBps) 00:17:01.416 00:17:01.416 09:45:24 -- dd/basic_rw.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=8192 --qd=1 --count=7 --json /dev/fd/62 00:17:01.416 09:45:24 -- dd/basic_rw.sh@37 -- # gen_conf 00:17:01.416 09:45:24 -- dd/common.sh@31 -- # xtrace_disable 00:17:01.416 09:45:24 -- common/autotest_common.sh@10 -- # set +x 00:17:01.674 { 00:17:01.674 "subsystems": [ 00:17:01.674 { 00:17:01.674 "subsystem": "bdev", 00:17:01.674 "config": [ 00:17:01.674 { 00:17:01.674 "params": { 00:17:01.674 "trtype": "pcie", 00:17:01.674 "traddr": "0000:00:10.0", 00:17:01.674 "name": "Nvme0" 00:17:01.674 }, 00:17:01.674 "method": "bdev_nvme_attach_controller" 00:17:01.674 }, 00:17:01.674 { 00:17:01.674 "method": "bdev_wait_for_examine" 00:17:01.674 } 00:17:01.674 ] 00:17:01.674 } 00:17:01.674 ] 00:17:01.674 } 00:17:01.674 [2024-04-18 09:45:24.867366] Starting SPDK v24.05-pre git sha1 65b4e17c6 / DPDK 23.11.0 initialization... 00:17:01.674 [2024-04-18 09:45:24.867572] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63990 ] 00:17:01.674 [2024-04-18 09:45:25.040773] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:01.933 [2024-04-18 09:45:25.276388] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:17:03.436  Copying: 56/56 [kB] (average 27 MBps) 00:17:03.436 00:17:03.436 09:45:26 -- dd/basic_rw.sh@44 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:17:03.436 09:45:26 -- dd/basic_rw.sh@45 -- # clear_nvme Nvme0n1 '' 57344 00:17:03.436 09:45:26 -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:17:03.436 09:45:26 -- dd/common.sh@11 -- # local nvme_ref= 00:17:03.436 09:45:26 -- dd/common.sh@12 -- # local size=57344 00:17:03.436 09:45:26 -- dd/common.sh@14 -- # local bs=1048576 00:17:03.436 09:45:26 -- dd/common.sh@15 -- # local count=1 00:17:03.436 09:45:26 -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 00:17:03.436 09:45:26 -- dd/common.sh@18 -- # gen_conf 00:17:03.436 09:45:26 -- dd/common.sh@31 -- # xtrace_disable 00:17:03.436 09:45:26 -- common/autotest_common.sh@10 -- # set +x 00:17:03.436 { 00:17:03.436 "subsystems": [ 00:17:03.436 { 00:17:03.436 "subsystem": "bdev", 00:17:03.436 "config": [ 00:17:03.436 { 00:17:03.436 "params": { 00:17:03.436 "trtype": "pcie", 00:17:03.436 "traddr": "0000:00:10.0", 00:17:03.436 "name": "Nvme0" 00:17:03.436 }, 00:17:03.436 "method": "bdev_nvme_attach_controller" 00:17:03.436 }, 00:17:03.436 { 00:17:03.436 "method": "bdev_wait_for_examine" 00:17:03.436 } 00:17:03.436 ] 00:17:03.436 } 00:17:03.436 ] 00:17:03.436 } 00:17:03.436 [2024-04-18 09:45:26.726498] Starting SPDK v24.05-pre git sha1 
65b4e17c6 / DPDK 23.11.0 initialization... 00:17:03.436 [2024-04-18 09:45:26.726671] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64023 ] 00:17:03.695 [2024-04-18 09:45:26.901343] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:03.953 [2024-04-18 09:45:27.144358] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:17:05.589  Copying: 1024/1024 [kB] (average 1000 MBps) 00:17:05.589 00:17:05.589 09:45:28 -- dd/basic_rw.sh@22 -- # for qd in "${qds[@]}" 00:17:05.589 09:45:28 -- dd/basic_rw.sh@23 -- # count=7 00:17:05.589 09:45:28 -- dd/basic_rw.sh@24 -- # count=7 00:17:05.589 09:45:28 -- dd/basic_rw.sh@25 -- # size=57344 00:17:05.589 09:45:28 -- dd/basic_rw.sh@27 -- # gen_bytes 57344 00:17:05.589 09:45:28 -- dd/common.sh@98 -- # xtrace_disable 00:17:05.589 09:45:28 -- common/autotest_common.sh@10 -- # set +x 00:17:06.156 09:45:29 -- dd/basic_rw.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --bs=8192 --qd=64 --json /dev/fd/62 00:17:06.156 09:45:29 -- dd/basic_rw.sh@30 -- # gen_conf 00:17:06.156 09:45:29 -- dd/common.sh@31 -- # xtrace_disable 00:17:06.156 09:45:29 -- common/autotest_common.sh@10 -- # set +x 00:17:06.156 { 00:17:06.156 "subsystems": [ 00:17:06.156 { 00:17:06.156 "subsystem": "bdev", 00:17:06.156 "config": [ 00:17:06.156 { 00:17:06.156 "params": { 00:17:06.156 "trtype": "pcie", 00:17:06.156 "traddr": "0000:00:10.0", 00:17:06.156 "name": "Nvme0" 00:17:06.156 }, 00:17:06.156 "method": "bdev_nvme_attach_controller" 00:17:06.156 }, 00:17:06.156 { 00:17:06.156 "method": "bdev_wait_for_examine" 00:17:06.156 } 00:17:06.156 ] 00:17:06.156 } 00:17:06.156 ] 00:17:06.156 } 00:17:06.156 [2024-04-18 09:45:29.443041] Starting SPDK v24.05-pre git sha1 65b4e17c6 / DPDK 23.11.0 initialization... 
00:17:06.156 [2024-04-18 09:45:29.443241] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64060 ] 00:17:06.414 [2024-04-18 09:45:29.619801] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:06.672 [2024-04-18 09:45:29.913516] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:17:07.865  Copying: 56/56 [kB] (average 54 MBps) 00:17:07.865 00:17:08.123 09:45:31 -- dd/basic_rw.sh@37 -- # gen_conf 00:17:08.123 09:45:31 -- dd/basic_rw.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=8192 --qd=64 --count=7 --json /dev/fd/62 00:17:08.123 09:45:31 -- dd/common.sh@31 -- # xtrace_disable 00:17:08.123 09:45:31 -- common/autotest_common.sh@10 -- # set +x 00:17:08.123 { 00:17:08.123 "subsystems": [ 00:17:08.123 { 00:17:08.123 "subsystem": "bdev", 00:17:08.123 "config": [ 00:17:08.123 { 00:17:08.123 "params": { 00:17:08.123 "trtype": "pcie", 00:17:08.123 "traddr": "0000:00:10.0", 00:17:08.123 "name": "Nvme0" 00:17:08.123 }, 00:17:08.123 "method": "bdev_nvme_attach_controller" 00:17:08.123 }, 00:17:08.123 { 00:17:08.123 "method": "bdev_wait_for_examine" 00:17:08.123 } 00:17:08.123 ] 00:17:08.123 } 00:17:08.123 ] 00:17:08.123 } 00:17:08.123 [2024-04-18 09:45:31.361341] Starting SPDK v24.05-pre git sha1 65b4e17c6 / DPDK 23.11.0 initialization... 00:17:08.123 [2024-04-18 09:45:31.361528] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64091 ] 00:17:08.381 [2024-04-18 09:45:31.532134] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:08.381 [2024-04-18 09:45:31.767379] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:17:10.362  Copying: 56/56 [kB] (average 54 MBps) 00:17:10.362 00:17:10.362 09:45:33 -- dd/basic_rw.sh@44 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:17:10.362 09:45:33 -- dd/basic_rw.sh@45 -- # clear_nvme Nvme0n1 '' 57344 00:17:10.362 09:45:33 -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:17:10.362 09:45:33 -- dd/common.sh@11 -- # local nvme_ref= 00:17:10.362 09:45:33 -- dd/common.sh@12 -- # local size=57344 00:17:10.362 09:45:33 -- dd/common.sh@14 -- # local bs=1048576 00:17:10.362 09:45:33 -- dd/common.sh@15 -- # local count=1 00:17:10.362 09:45:33 -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 00:17:10.362 09:45:33 -- dd/common.sh@18 -- # gen_conf 00:17:10.362 09:45:33 -- dd/common.sh@31 -- # xtrace_disable 00:17:10.362 09:45:33 -- common/autotest_common.sh@10 -- # set +x 00:17:10.362 { 00:17:10.362 "subsystems": [ 00:17:10.362 { 00:17:10.362 "subsystem": "bdev", 00:17:10.362 "config": [ 00:17:10.362 { 00:17:10.362 "params": { 00:17:10.362 "trtype": "pcie", 00:17:10.362 "traddr": "0000:00:10.0", 00:17:10.362 "name": "Nvme0" 00:17:10.362 }, 00:17:10.362 "method": "bdev_nvme_attach_controller" 00:17:10.362 }, 00:17:10.362 { 00:17:10.362 "method": "bdev_wait_for_examine" 00:17:10.362 } 00:17:10.362 ] 00:17:10.362 } 00:17:10.362 ] 00:17:10.362 } 00:17:10.362 [2024-04-18 09:45:33.471682] Starting SPDK v24.05-pre git sha1 
65b4e17c6 / DPDK 23.11.0 initialization... 00:17:10.362 [2024-04-18 09:45:33.471861] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64124 ] 00:17:10.362 [2024-04-18 09:45:33.649817] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:10.621 [2024-04-18 09:45:33.884297] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:17:12.123  Copying: 1024/1024 [kB] (average 500 MBps) 00:17:12.123 00:17:12.123 09:45:35 -- dd/basic_rw.sh@21 -- # for bs in "${bss[@]}" 00:17:12.123 09:45:35 -- dd/basic_rw.sh@22 -- # for qd in "${qds[@]}" 00:17:12.123 09:45:35 -- dd/basic_rw.sh@23 -- # count=3 00:17:12.123 09:45:35 -- dd/basic_rw.sh@24 -- # count=3 00:17:12.123 09:45:35 -- dd/basic_rw.sh@25 -- # size=49152 00:17:12.123 09:45:35 -- dd/basic_rw.sh@27 -- # gen_bytes 49152 00:17:12.123 09:45:35 -- dd/common.sh@98 -- # xtrace_disable 00:17:12.123 09:45:35 -- common/autotest_common.sh@10 -- # set +x 00:17:12.690 09:45:35 -- dd/basic_rw.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --bs=16384 --qd=1 --json /dev/fd/62 00:17:12.690 09:45:35 -- dd/basic_rw.sh@30 -- # gen_conf 00:17:12.690 09:45:35 -- dd/common.sh@31 -- # xtrace_disable 00:17:12.690 09:45:35 -- common/autotest_common.sh@10 -- # set +x 00:17:12.690 { 00:17:12.690 "subsystems": [ 00:17:12.690 { 00:17:12.690 "subsystem": "bdev", 00:17:12.690 "config": [ 00:17:12.690 { 00:17:12.690 "params": { 00:17:12.690 "trtype": "pcie", 00:17:12.690 "traddr": "0000:00:10.0", 00:17:12.690 "name": "Nvme0" 00:17:12.690 }, 00:17:12.690 "method": "bdev_nvme_attach_controller" 00:17:12.690 }, 00:17:12.690 { 00:17:12.690 "method": "bdev_wait_for_examine" 00:17:12.690 } 00:17:12.690 ] 00:17:12.690 } 00:17:12.690 ] 00:17:12.690 } 00:17:12.690 [2024-04-18 09:45:35.887358] Starting SPDK v24.05-pre git sha1 65b4e17c6 / DPDK 23.11.0 initialization... 
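The size= values logged for each pass are simply count multiplied by block size, so every block size moves a comparable amount of data through the bdev:

    echo $(( 15 * 4096 ))    # 61440 bytes per pass at bs=4096
    echo $((  7 * 8192 ))    # 57344 bytes per pass at bs=8192
    echo $((  3 * 16384 ))   # 49152 bytes per pass at bs=16384
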
00:17:12.690 [2024-04-18 09:45:35.887567] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64155 ] 00:17:12.690 [2024-04-18 09:45:36.058478] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:13.019 [2024-04-18 09:45:36.304241] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:17:14.540  Copying: 48/48 [kB] (average 46 MBps) 00:17:14.540 00:17:14.540 09:45:37 -- dd/basic_rw.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=16384 --qd=1 --count=3 --json /dev/fd/62 00:17:14.540 09:45:37 -- dd/basic_rw.sh@37 -- # gen_conf 00:17:14.540 09:45:37 -- dd/common.sh@31 -- # xtrace_disable 00:17:14.540 09:45:37 -- common/autotest_common.sh@10 -- # set +x 00:17:14.540 { 00:17:14.540 "subsystems": [ 00:17:14.540 { 00:17:14.540 "subsystem": "bdev", 00:17:14.540 "config": [ 00:17:14.540 { 00:17:14.540 "params": { 00:17:14.540 "trtype": "pcie", 00:17:14.540 "traddr": "0000:00:10.0", 00:17:14.540 "name": "Nvme0" 00:17:14.540 }, 00:17:14.540 "method": "bdev_nvme_attach_controller" 00:17:14.540 }, 00:17:14.540 { 00:17:14.540 "method": "bdev_wait_for_examine" 00:17:14.540 } 00:17:14.540 ] 00:17:14.540 } 00:17:14.540 ] 00:17:14.540 } 00:17:14.799 [2024-04-18 09:45:37.931289] Starting SPDK v24.05-pre git sha1 65b4e17c6 / DPDK 23.11.0 initialization... 00:17:14.799 [2024-04-18 09:45:37.931453] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64186 ] 00:17:14.799 [2024-04-18 09:45:38.101217] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:15.057 [2024-04-18 09:45:38.342415] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:17:16.594  Copying: 48/48 [kB] (average 23 MBps) 00:17:16.594 00:17:16.594 09:45:39 -- dd/basic_rw.sh@44 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:17:16.594 09:45:39 -- dd/basic_rw.sh@45 -- # clear_nvme Nvme0n1 '' 49152 00:17:16.594 09:45:39 -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:17:16.594 09:45:39 -- dd/common.sh@11 -- # local nvme_ref= 00:17:16.594 09:45:39 -- dd/common.sh@12 -- # local size=49152 00:17:16.594 09:45:39 -- dd/common.sh@14 -- # local bs=1048576 00:17:16.594 09:45:39 -- dd/common.sh@15 -- # local count=1 00:17:16.594 09:45:39 -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 00:17:16.594 09:45:39 -- dd/common.sh@18 -- # gen_conf 00:17:16.594 09:45:39 -- dd/common.sh@31 -- # xtrace_disable 00:17:16.594 09:45:39 -- common/autotest_common.sh@10 -- # set +x 00:17:16.594 { 00:17:16.594 "subsystems": [ 00:17:16.594 { 00:17:16.594 "subsystem": "bdev", 00:17:16.594 "config": [ 00:17:16.594 { 00:17:16.594 "params": { 00:17:16.594 "trtype": "pcie", 00:17:16.594 "traddr": "0000:00:10.0", 00:17:16.594 "name": "Nvme0" 00:17:16.594 }, 00:17:16.594 "method": "bdev_nvme_attach_controller" 00:17:16.594 }, 00:17:16.594 { 00:17:16.594 "method": "bdev_wait_for_examine" 00:17:16.594 } 00:17:16.594 ] 00:17:16.594 } 00:17:16.594 ] 00:17:16.594 } 00:17:16.594 [2024-04-18 09:45:39.865507] Starting SPDK v24.05-pre git sha1 
65b4e17c6 / DPDK 23.11.0 initialization... 00:17:16.594 [2024-04-18 09:45:39.865703] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64219 ] 00:17:16.853 [2024-04-18 09:45:40.038585] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:17.111 [2024-04-18 09:45:40.278656] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:17:18.743  Copying: 1024/1024 [kB] (average 500 MBps) 00:17:18.743 00:17:18.743 09:45:41 -- dd/basic_rw.sh@22 -- # for qd in "${qds[@]}" 00:17:18.743 09:45:41 -- dd/basic_rw.sh@23 -- # count=3 00:17:18.743 09:45:41 -- dd/basic_rw.sh@24 -- # count=3 00:17:18.743 09:45:41 -- dd/basic_rw.sh@25 -- # size=49152 00:17:18.743 09:45:41 -- dd/basic_rw.sh@27 -- # gen_bytes 49152 00:17:18.743 09:45:41 -- dd/common.sh@98 -- # xtrace_disable 00:17:18.743 09:45:41 -- common/autotest_common.sh@10 -- # set +x 00:17:19.001 09:45:42 -- dd/basic_rw.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --bs=16384 --qd=64 --json /dev/fd/62 00:17:19.001 09:45:42 -- dd/basic_rw.sh@30 -- # gen_conf 00:17:19.001 09:45:42 -- dd/common.sh@31 -- # xtrace_disable 00:17:19.001 09:45:42 -- common/autotest_common.sh@10 -- # set +x 00:17:19.001 { 00:17:19.001 "subsystems": [ 00:17:19.001 { 00:17:19.001 "subsystem": "bdev", 00:17:19.001 "config": [ 00:17:19.001 { 00:17:19.001 "params": { 00:17:19.001 "trtype": "pcie", 00:17:19.001 "traddr": "0000:00:10.0", 00:17:19.001 "name": "Nvme0" 00:17:19.001 }, 00:17:19.001 "method": "bdev_nvme_attach_controller" 00:17:19.001 }, 00:17:19.001 { 00:17:19.001 "method": "bdev_wait_for_examine" 00:17:19.001 } 00:17:19.001 ] 00:17:19.001 } 00:17:19.001 ] 00:17:19.001 } 00:17:19.260 [2024-04-18 09:45:42.411804] Starting SPDK v24.05-pre git sha1 65b4e17c6 / DPDK 23.11.0 initialization... 
00:17:19.260 [2024-04-18 09:45:42.412285] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64250 ] 00:17:19.260 [2024-04-18 09:45:42.595803] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:19.518 [2024-04-18 09:45:42.875284] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:17:21.023  Copying: 48/48 [kB] (average 46 MBps) 00:17:21.023 00:17:21.023 09:45:44 -- dd/basic_rw.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=16384 --qd=64 --count=3 --json /dev/fd/62 00:17:21.023 09:45:44 -- dd/basic_rw.sh@37 -- # gen_conf 00:17:21.023 09:45:44 -- dd/common.sh@31 -- # xtrace_disable 00:17:21.023 09:45:44 -- common/autotest_common.sh@10 -- # set +x 00:17:21.023 { 00:17:21.023 "subsystems": [ 00:17:21.023 { 00:17:21.023 "subsystem": "bdev", 00:17:21.023 "config": [ 00:17:21.023 { 00:17:21.023 "params": { 00:17:21.023 "trtype": "pcie", 00:17:21.023 "traddr": "0000:00:10.0", 00:17:21.023 "name": "Nvme0" 00:17:21.023 }, 00:17:21.023 "method": "bdev_nvme_attach_controller" 00:17:21.023 }, 00:17:21.023 { 00:17:21.023 "method": "bdev_wait_for_examine" 00:17:21.023 } 00:17:21.023 ] 00:17:21.023 } 00:17:21.023 ] 00:17:21.023 } 00:17:21.023 [2024-04-18 09:45:44.324522] Starting SPDK v24.05-pre git sha1 65b4e17c6 / DPDK 23.11.0 initialization... 00:17:21.023 [2024-04-18 09:45:44.324713] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64281 ] 00:17:21.283 [2024-04-18 09:45:44.502799] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:21.541 [2024-04-18 09:45:44.751322] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:17:23.239  Copying: 48/48 [kB] (average 46 MBps) 00:17:23.239 00:17:23.239 09:45:46 -- dd/basic_rw.sh@44 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:17:23.240 09:45:46 -- dd/basic_rw.sh@45 -- # clear_nvme Nvme0n1 '' 49152 00:17:23.240 09:45:46 -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:17:23.240 09:45:46 -- dd/common.sh@11 -- # local nvme_ref= 00:17:23.240 09:45:46 -- dd/common.sh@12 -- # local size=49152 00:17:23.240 09:45:46 -- dd/common.sh@14 -- # local bs=1048576 00:17:23.240 09:45:46 -- dd/common.sh@15 -- # local count=1 00:17:23.240 09:45:46 -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 00:17:23.240 09:45:46 -- dd/common.sh@18 -- # gen_conf 00:17:23.240 09:45:46 -- dd/common.sh@31 -- # xtrace_disable 00:17:23.240 09:45:46 -- common/autotest_common.sh@10 -- # set +x 00:17:23.240 { 00:17:23.240 "subsystems": [ 00:17:23.240 { 00:17:23.240 "subsystem": "bdev", 00:17:23.240 "config": [ 00:17:23.240 { 00:17:23.240 "params": { 00:17:23.240 "trtype": "pcie", 00:17:23.240 "traddr": "0000:00:10.0", 00:17:23.240 "name": "Nvme0" 00:17:23.240 }, 00:17:23.240 "method": "bdev_nvme_attach_controller" 00:17:23.240 }, 00:17:23.240 { 00:17:23.240 "method": "bdev_wait_for_examine" 00:17:23.240 } 00:17:23.240 ] 00:17:23.240 } 00:17:23.240 ] 00:17:23.240 } 00:17:23.240 [2024-04-18 09:45:46.426582] Starting SPDK v24.05-pre git sha1 
65b4e17c6 / DPDK 23.11.0 initialization... 00:17:23.240 [2024-04-18 09:45:46.426744] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64314 ] 00:17:23.240 [2024-04-18 09:45:46.592213] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:23.502 [2024-04-18 09:45:46.834567] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:17:25.006  Copying: 1024/1024 [kB] (average 500 MBps) 00:17:25.006 00:17:25.006 00:17:25.006 real 0m39.575s 00:17:25.006 user 0m33.062s 00:17:25.006 sys 0m17.369s 00:17:25.006 09:45:48 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:17:25.006 09:45:48 -- common/autotest_common.sh@10 -- # set +x 00:17:25.006 ************************************ 00:17:25.006 END TEST dd_rw 00:17:25.006 ************************************ 00:17:25.006 09:45:48 -- dd/basic_rw.sh@104 -- # run_test dd_rw_offset basic_offset 00:17:25.006 09:45:48 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:17:25.006 09:45:48 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:17:25.006 09:45:48 -- common/autotest_common.sh@10 -- # set +x 00:17:25.266 ************************************ 00:17:25.266 START TEST dd_rw_offset 00:17:25.266 ************************************ 00:17:25.266 09:45:48 -- common/autotest_common.sh@1111 -- # basic_offset 00:17:25.266 09:45:48 -- dd/basic_rw.sh@52 -- # local count seek skip data data_check 00:17:25.266 09:45:48 -- dd/basic_rw.sh@54 -- # gen_bytes 4096 00:17:25.266 09:45:48 -- dd/common.sh@98 -- # xtrace_disable 00:17:25.266 09:45:48 -- common/autotest_common.sh@10 -- # set +x 00:17:25.266 09:45:48 -- dd/basic_rw.sh@55 -- # (( count = seek = skip = 1 )) 00:17:25.266 09:45:48 -- dd/basic_rw.sh@56 -- # 
data=nx16djkna8lridfvqnjuigkj6rvk77oyjzp8ayrne1c5y4c79fadydniqih8l8fu76p8pce8sjxrv3zsiguc8ahejlgm8cqhqctgh5wg31dj4d70abubce2o84kxps08u75wuo8yu7xhx07ca3q7q0q2orrsftee4zqlzw4f18ao95vl7ln5ju9gw5lremg0k4rio9zs4wd2k23hm9ukse0kqzqe7as2kjjt5you14ctqne9tz3sdcxa8w5048oq8sfj35e9rx0iv8m1v1jyqg08my3b7bkzb0a6icz0z10gj6319w75b3prk0gyl4k7cfp5wzzxkbyr3f3zqgmiqz0e29f4c7q3uh0e9xcg88w331usf0ud2tk64q0ve9xopn04v83o3jtmk6djtiyjpj4s8xksnrb9ysspk7bgd2bw8pycapdy7l3kp36uei19u0mb4rgeh9gufgbh5pr59027h74goh2csrv9vf1dro4yx7roc3yjomwrgcw9r00yrhtypolx5dbc98gi5lw5cps85s3b26z0zlv2540jjau6163lila8luqy9znmls0ibc1sfwwzm2avn4v2t90krq08odbjzfwabyabmtq5cnydscjv1h0au3v4phv04lwngaeu4gix4stnhdchnt5z5m55dovbnrqs06a5xnouihn5dlvdd2qcgbrjwnouuv4t0hjv2kgl7vx5pzmla3drytapva6igrntmhbjozy7amuuvxvw4mmo99hplc0gvn7t5av6arnkm6jg3s2o0gnitu7x0rud44vjrl9dpz3dbtna4rwux974oc3744w6sed9kk90tl7f28ftatzt747plqe2dw36geczbzmfxoje93ratolq559cshhrflmtrcxz0pfgtr7mcrpst1qpu1e9576rnr4um73dz6itslrc079g1cszs00znxrazajpooya2ghdwyejfqmcgzctds3utroppvmj16tvs7vwyr318xhqzcymdnz6320o72yis9oqkwdm09h2ik05enocuc3ug28rrlncba10zomzx3pa7g0dks5rm0jq1ftae48jqm7l88qzyy99gyuftfmlcsuq8otdrvgg1ndl6hqetwcz7cu1mwwlvtirluwcxrrfch7qhs0fkzpjoj5ge6my79jqgn2k5deic0c9004mdcr6dqqrqr6g3gjzgf7el5f0qgbzieff6gwwqbjyh5r4cozf4i33uq3jry43cujoxzmo0r7ctczq43ym65kaqatl28ioyfn76qzac3cr256g7lve0cwvx8lf0x0p5ofrmmn5ch11y94pz41e06cnd2zlaewruhxulejasmjk9nspl4q7rvpr1wak4rwutm8qv90dhql8mpz6fyq8p32a2cww2rsyqfhsxeziyvrnqdye5i4he8ayhigosqjd3dx14h4z9lw54bqt96vtb38km62e2fhiawmzyh115d1dz6g6qk6xvd0dt8tahs3rwc2suv2mnadkqxvu7bp4lce7cc6olz879lym0ip0ptlte5jiarqlhp9cxjum8c0ell36oqjsdj1dun7rxahr9yfobls5bnidt4pt2l6l565pto6mfqanyv1ykp3hvdzdse1a657x7zivdkmw4agtdagg10aoabcx726gvb75ctnytkkejftwl13ug4s0kxirmob0d7kethjrnfb0tvusx9zi73od6mtbuowsmmjlh9q572qh7x6arkzfvqzbwz60wnbhnc3n68nmf6ivkx4vyh9jgv8gbbrig3nmutwx13vwlw75a8dq6prgc7s5jba35glkhb2gjo2kgoat702fn5ynareekovrt228v2a8tkxva0r1sei4q5a1ohon7quj7neadkclzhgojlffymu7dhag9yh88sfq0d85zljr92t7zsk0h4dllhofxcrukfp88h05mxhqsmkpw3c5giqzmm4hyncfbte9j2odf0ow7k4cfwy7cdi7x3c4tgqf4oxsp1kl14nxplox0dvnrq40tno38ql6z8re7ckjslpne8s59dphm6s9gfq3tng1l7qdqha8ikfpf0krvqoqu7jfpwl1lv22y75a2uzanlpv6l9c18slqu8699en2qzyad33ncbexprq0jlfq6er3cqua9rqzlw38bpnp01g23krwmtxrwcips6h2zeg9vp88wg83urfte5jeres5eyalg5mdn48ulrtwff57vdrs7czxkbeqribv9qfupr6b2rzkqk8mfyjipdmrjk3z7jgp7xhsjol30k6hsgku9s5j5kjn2md555en8ao2zgpryqohfqpo7kilzzy08txkzoh2419vlt8wo0ogxuxgdtq1agzmv8nyuidgfju2msha9vw3mnsbm77ud2fjsx9j7ff5n1xdknikz56prclntyaxkc39csui02auy27zgiurjdqs078l8hr4b1mt729vlluip20pjcgn3v15hcadxny9etds70pi067i881inzm9orqld3n8jpemum4buo9ec4wbmjyp1qscfgzqpsop79ien8liccfgafup4qnh9bw7zmbv1rt94tzvfha9ce908ifzkh9rnkrbjnso36uc2uh2im1il5pz9txbp3rncv2f7hnjstjpb0vxdeqtya1bh3x07ib5uznvauqpmasws6kuj1lt44lokvl6zif3e65u4750crf53u9id8xedf66hk3uybecmll9euicrpvdrmr84j45dnpjacn9gberwfdza8e6uh4j03ndn71pf4vato9h4y8f1vo0prq7c94xty0y5ggxh9a653dnc6quxmgslkj8d5r5mn6i4odfr0vzrvw4l3jpwednt5s50ggt90hf997nu1q4lyvizxg50agnvv65pi5ic1d047k31n8hw3yxtiqxht95pq6txlkiykntu76kcadrqbo47m1p49u8yeqx9un2g896c8cmv9eim9c5id3vn4hdspq97onv55c6omx341dvw801xi6l77j2kdggg6ruxyq6melw1jf7ap19g3gfhwurcj5tayi60n48hnqrt7u95d0chb5a0n788qxr8ro3gerprsmituz9qirxd2k59pmyczkb446k02hhd633sdviewfgpsndagwgksk4oc5n8z7qwixg1dkretfxl5caiuexm6d23yt22v3l00ns2d023qwakv115q681zs1u7w2jc5xshjp8cv5e2v6ksy151rytncq7a9c540x6nxofiy57mid0kol2aucpngr2aedz0qjkvb4i0cyus9c3ekqnfqz8xvchwwq0pv3vhyhjtivaynnozo959y1s43jn27t5w6mlxbn2fy7dqp3vypmbrsqwb52d7hr3muph94yusc5854ioi9c29eyz2sjs6vrseow4kfmo7yxedffyu6kyua9rkbuyxf1zs4arguaigexo1fj8w2pfq92uzyje0c9jb4vst7jymm8gmlie21jfh089o3353gthhuwaf25d6taw1ta5m5ptib10d46f
jvsewz2s1bryqwtmu3ienxo7lzrmxuv2yx1n0l7ef39vajb6e4733pdyxl0cuttkk937zb6lly7d9wv3djydjtjud6sl9q49v9lpm145qgexmpqtm6pumpbn3pnxqx19a7zgcmxmkhq96whes7dwx8vcgv4qd0pil590q7etmnj2rriuluh35ngrh3vc7z0m0s81y7kqwpi1ryosj63s5wob722ue7is0q5ktiqcaspduo0cvyjtw9fp3cuipk2fkmpboxiu764gbnnxbp7d3kan5xrhlv4fapl5ja88nj1dgfj9u8v8xlc9v4tl4av29u2yvl8stocwkj18n2o8n4aczakeo7382cmut66127ysn1zoeoawupwvk7hefpqpzmnuwylm6lgta4ib8xwjiix0uyizqt6mr4klrd5hfve5vvvxrnaz37nu2je673jxckmw34lj0utvwc4b70quf9iixcb9ubnvdxa5ev5pse9pxairpwzxoygr9zl7tuab46vqoo60z02eyiijizb5k7imh8r4ysao40 00:17:25.266 09:45:48 -- dd/basic_rw.sh@59 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --seek=1 --json /dev/fd/62 00:17:25.266 09:45:48 -- dd/basic_rw.sh@59 -- # gen_conf 00:17:25.266 09:45:48 -- dd/common.sh@31 -- # xtrace_disable 00:17:25.266 09:45:48 -- common/autotest_common.sh@10 -- # set +x 00:17:25.266 { 00:17:25.266 "subsystems": [ 00:17:25.266 { 00:17:25.266 "subsystem": "bdev", 00:17:25.266 "config": [ 00:17:25.266 { 00:17:25.266 "params": { 00:17:25.266 "trtype": "pcie", 00:17:25.266 "traddr": "0000:00:10.0", 00:17:25.266 "name": "Nvme0" 00:17:25.266 }, 00:17:25.266 "method": "bdev_nvme_attach_controller" 00:17:25.266 }, 00:17:25.266 { 00:17:25.266 "method": "bdev_wait_for_examine" 00:17:25.266 } 00:17:25.266 ] 00:17:25.266 } 00:17:25.266 ] 00:17:25.266 } 00:17:25.266 [2024-04-18 09:45:48.541264] Starting SPDK v24.05-pre git sha1 65b4e17c6 / DPDK 23.11.0 initialization... 00:17:25.266 [2024-04-18 09:45:48.541408] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64367 ] 00:17:25.525 [2024-04-18 09:45:48.707095] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:25.783 [2024-04-18 09:45:48.958305] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:17:27.418  Copying: 4096/4096 [B] (average 4000 kBps) 00:17:27.418 00:17:27.418 09:45:50 -- dd/basic_rw.sh@65 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --skip=1 --count=1 --json /dev/fd/62 00:17:27.418 09:45:50 -- dd/basic_rw.sh@65 -- # gen_conf 00:17:27.418 09:45:50 -- dd/common.sh@31 -- # xtrace_disable 00:17:27.418 09:45:50 -- common/autotest_common.sh@10 -- # set +x 00:17:27.418 { 00:17:27.418 "subsystems": [ 00:17:27.418 { 00:17:27.418 "subsystem": "bdev", 00:17:27.418 "config": [ 00:17:27.418 { 00:17:27.418 "params": { 00:17:27.418 "trtype": "pcie", 00:17:27.418 "traddr": "0000:00:10.0", 00:17:27.418 "name": "Nvme0" 00:17:27.418 }, 00:17:27.418 "method": "bdev_nvme_attach_controller" 00:17:27.418 }, 00:17:27.418 { 00:17:27.418 "method": "bdev_wait_for_examine" 00:17:27.418 } 00:17:27.418 ] 00:17:27.418 } 00:17:27.418 ] 00:17:27.418 } 00:17:27.418 [2024-04-18 09:45:50.633222] Starting SPDK v24.05-pre git sha1 65b4e17c6 / DPDK 23.11.0 initialization... 
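The dd_rw_offset test starting here rounds out basic_rw with a positional check: one native block of generated data (the long random string above) is written one block into the bdev (--seek=1), read back from the same offset (--skip=1 --count=1), and compared byte-for-byte via read -rn4096. A compressed sketch of that round trip, assuming the generated bytes were first written to dd.dump0 and reusing the gen_conf stand-in from earlier:

    # Write one block at LBA offset 1, read it back, compare the payloads.
    data=$(cat dd.dump0)                                   # the 4096 generated bytes
    spdk_dd --if=dd.dump0 --ob=Nvme0n1 --seek=1 --json <(gen_conf)
    spdk_dd --ib=Nvme0n1 --of=dd.dump1 --skip=1 --count=1 --json <(gen_conf)
    read -rn4096 data_check < dd.dump1
    [[ $data == "$data_check" ]] && echo "offset round-trip OK"
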
00:17:27.418 [2024-04-18 09:45:50.633429] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64403 ] 00:17:27.418 [2024-04-18 09:45:50.799635] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:27.677 [2024-04-18 09:45:51.038383] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:17:29.183  Copying: 4096/4096 [B] (average 4000 kBps) 00:17:29.183 00:17:29.183 09:45:52 -- dd/basic_rw.sh@71 -- # read -rn4096 data_check 00:17:29.184 09:45:52 -- dd/basic_rw.sh@72 -- # [[ nx16djkna8lridfvqnjuigkj6rvk77oyjzp8ayrne1c5y4c79fadydniqih8l8fu76p8pce8sjxrv3zsiguc8ahejlgm8cqhqctgh5wg31dj4d70abubce2o84kxps08u75wuo8yu7xhx07ca3q7q0q2orrsftee4zqlzw4f18ao95vl7ln5ju9gw5lremg0k4rio9zs4wd2k23hm9ukse0kqzqe7as2kjjt5you14ctqne9tz3sdcxa8w5048oq8sfj35e9rx0iv8m1v1jyqg08my3b7bkzb0a6icz0z10gj6319w75b3prk0gyl4k7cfp5wzzxkbyr3f3zqgmiqz0e29f4c7q3uh0e9xcg88w331usf0ud2tk64q0ve9xopn04v83o3jtmk6djtiyjpj4s8xksnrb9ysspk7bgd2bw8pycapdy7l3kp36uei19u0mb4rgeh9gufgbh5pr59027h74goh2csrv9vf1dro4yx7roc3yjomwrgcw9r00yrhtypolx5dbc98gi5lw5cps85s3b26z0zlv2540jjau6163lila8luqy9znmls0ibc1sfwwzm2avn4v2t90krq08odbjzfwabyabmtq5cnydscjv1h0au3v4phv04lwngaeu4gix4stnhdchnt5z5m55dovbnrqs06a5xnouihn5dlvdd2qcgbrjwnouuv4t0hjv2kgl7vx5pzmla3drytapva6igrntmhbjozy7amuuvxvw4mmo99hplc0gvn7t5av6arnkm6jg3s2o0gnitu7x0rud44vjrl9dpz3dbtna4rwux974oc3744w6sed9kk90tl7f28ftatzt747plqe2dw36geczbzmfxoje93ratolq559cshhrflmtrcxz0pfgtr7mcrpst1qpu1e9576rnr4um73dz6itslrc079g1cszs00znxrazajpooya2ghdwyejfqmcgzctds3utroppvmj16tvs7vwyr318xhqzcymdnz6320o72yis9oqkwdm09h2ik05enocuc3ug28rrlncba10zomzx3pa7g0dks5rm0jq1ftae48jqm7l88qzyy99gyuftfmlcsuq8otdrvgg1ndl6hqetwcz7cu1mwwlvtirluwcxrrfch7qhs0fkzpjoj5ge6my79jqgn2k5deic0c9004mdcr6dqqrqr6g3gjzgf7el5f0qgbzieff6gwwqbjyh5r4cozf4i33uq3jry43cujoxzmo0r7ctczq43ym65kaqatl28ioyfn76qzac3cr256g7lve0cwvx8lf0x0p5ofrmmn5ch11y94pz41e06cnd2zlaewruhxulejasmjk9nspl4q7rvpr1wak4rwutm8qv90dhql8mpz6fyq8p32a2cww2rsyqfhsxeziyvrnqdye5i4he8ayhigosqjd3dx14h4z9lw54bqt96vtb38km62e2fhiawmzyh115d1dz6g6qk6xvd0dt8tahs3rwc2suv2mnadkqxvu7bp4lce7cc6olz879lym0ip0ptlte5jiarqlhp9cxjum8c0ell36oqjsdj1dun7rxahr9yfobls5bnidt4pt2l6l565pto6mfqanyv1ykp3hvdzdse1a657x7zivdkmw4agtdagg10aoabcx726gvb75ctnytkkejftwl13ug4s0kxirmob0d7kethjrnfb0tvusx9zi73od6mtbuowsmmjlh9q572qh7x6arkzfvqzbwz60wnbhnc3n68nmf6ivkx4vyh9jgv8gbbrig3nmutwx13vwlw75a8dq6prgc7s5jba35glkhb2gjo2kgoat702fn5ynareekovrt228v2a8tkxva0r1sei4q5a1ohon7quj7neadkclzhgojlffymu7dhag9yh88sfq0d85zljr92t7zsk0h4dllhofxcrukfp88h05mxhqsmkpw3c5giqzmm4hyncfbte9j2odf0ow7k4cfwy7cdi7x3c4tgqf4oxsp1kl14nxplox0dvnrq40tno38ql6z8re7ckjslpne8s59dphm6s9gfq3tng1l7qdqha8ikfpf0krvqoqu7jfpwl1lv22y75a2uzanlpv6l9c18slqu8699en2qzyad33ncbexprq0jlfq6er3cqua9rqzlw38bpnp01g23krwmtxrwcips6h2zeg9vp88wg83urfte5jeres5eyalg5mdn48ulrtwff57vdrs7czxkbeqribv9qfupr6b2rzkqk8mfyjipdmrjk3z7jgp7xhsjol30k6hsgku9s5j5kjn2md555en8ao2zgpryqohfqpo7kilzzy08txkzoh2419vlt8wo0ogxuxgdtq1agzmv8nyuidgfju2msha9vw3mnsbm77ud2fjsx9j7ff5n1xdknikz56prclntyaxkc39csui02auy27zgiurjdqs078l8hr4b1mt729vlluip20pjcgn3v15hcadxny9etds70pi067i881inzm9orqld3n8jpemum4buo9ec4wbmjyp1qscfgzqpsop79ien8liccfgafup4qnh9bw7zmbv1rt94tzvfha9ce908ifzkh9rnkrbjnso36uc2uh2im1il5pz9txbp3rncv2f7hnjstjpb0vxdeqtya1bh3x07ib5uznvauqpmasws6kuj1lt44lokvl6zif3e65u4750crf53u9id8xedf66hk3uybecmll9euicrpvdrmr84j45dnpjacn9gberwfdza8e6uh4j03ndn71pf4vato9h4y8f1vo0prq7c94xty0y5ggxh9a653dnc6quxmgslkj8d5r5mn6i4odfr0vzrvw4
l3jpwednt5s50ggt90hf997nu1q4lyvizxg50agnvv65pi5ic1d047k31n8hw3yxtiqxht95pq6txlkiykntu76kcadrqbo47m1p49u8yeqx9un2g896c8cmv9eim9c5id3vn4hdspq97onv55c6omx341dvw801xi6l77j2kdggg6ruxyq6melw1jf7ap19g3gfhwurcj5tayi60n48hnqrt7u95d0chb5a0n788qxr8ro3gerprsmituz9qirxd2k59pmyczkb446k02hhd633sdviewfgpsndagwgksk4oc5n8z7qwixg1dkretfxl5caiuexm6d23yt22v3l00ns2d023qwakv115q681zs1u7w2jc5xshjp8cv5e2v6ksy151rytncq7a9c540x6nxofiy57mid0kol2aucpngr2aedz0qjkvb4i0cyus9c3ekqnfqz8xvchwwq0pv3vhyhjtivaynnozo959y1s43jn27t5w6mlxbn2fy7dqp3vypmbrsqwb52d7hr3muph94yusc5854ioi9c29eyz2sjs6vrseow4kfmo7yxedffyu6kyua9rkbuyxf1zs4arguaigexo1fj8w2pfq92uzyje0c9jb4vst7jymm8gmlie21jfh089o3353gthhuwaf25d6taw1ta5m5ptib10d46fjvsewz2s1bryqwtmu3ienxo7lzrmxuv2yx1n0l7ef39vajb6e4733pdyxl0cuttkk937zb6lly7d9wv3djydjtjud6sl9q49v9lpm145qgexmpqtm6pumpbn3pnxqx19a7zgcmxmkhq96whes7dwx8vcgv4qd0pil590q7etmnj2rriuluh35ngrh3vc7z0m0s81y7kqwpi1ryosj63s5wob722ue7is0q5ktiqcaspduo0cvyjtw9fp3cuipk2fkmpboxiu764gbnnxbp7d3kan5xrhlv4fapl5ja88nj1dgfj9u8v8xlc9v4tl4av29u2yvl8stocwkj18n2o8n4aczakeo7382cmut66127ysn1zoeoawupwvk7hefpqpzmnuwylm6lgta4ib8xwjiix0uyizqt6mr4klrd5hfve5vvvxrnaz37nu2je673jxckmw34lj0utvwc4b70quf9iixcb9ubnvdxa5ev5pse9pxairpwzxoygr9zl7tuab46vqoo60z02eyiijizb5k7imh8r4ysao40 == \n\x\1\6\d\j\k\n\a\8\l\r\i\d\f\v\q\n\j\u\i\g\k\j\6\r\v\k\7\7\o\y\j\z\p\8\a\y\r\n\e\1\c\5\y\4\c\7\9\f\a\d\y\d\n\i\q\i\h\8\l\8\f\u\7\6\p\8\p\c\e\8\s\j\x\r\v\3\z\s\i\g\u\c\8\a\h\e\j\l\g\m\8\c\q\h\q\c\t\g\h\5\w\g\3\1\d\j\4\d\7\0\a\b\u\b\c\e\2\o\8\4\k\x\p\s\0\8\u\7\5\w\u\o\8\y\u\7\x\h\x\0\7\c\a\3\q\7\q\0\q\2\o\r\r\s\f\t\e\e\4\z\q\l\z\w\4\f\1\8\a\o\9\5\v\l\7\l\n\5\j\u\9\g\w\5\l\r\e\m\g\0\k\4\r\i\o\9\z\s\4\w\d\2\k\2\3\h\m\9\u\k\s\e\0\k\q\z\q\e\7\a\s\2\k\j\j\t\5\y\o\u\1\4\c\t\q\n\e\9\t\z\3\s\d\c\x\a\8\w\5\0\4\8\o\q\8\s\f\j\3\5\e\9\r\x\0\i\v\8\m\1\v\1\j\y\q\g\0\8\m\y\3\b\7\b\k\z\b\0\a\6\i\c\z\0\z\1\0\g\j\6\3\1\9\w\7\5\b\3\p\r\k\0\g\y\l\4\k\7\c\f\p\5\w\z\z\x\k\b\y\r\3\f\3\z\q\g\m\i\q\z\0\e\2\9\f\4\c\7\q\3\u\h\0\e\9\x\c\g\8\8\w\3\3\1\u\s\f\0\u\d\2\t\k\6\4\q\0\v\e\9\x\o\p\n\0\4\v\8\3\o\3\j\t\m\k\6\d\j\t\i\y\j\p\j\4\s\8\x\k\s\n\r\b\9\y\s\s\p\k\7\b\g\d\2\b\w\8\p\y\c\a\p\d\y\7\l\3\k\p\3\6\u\e\i\1\9\u\0\m\b\4\r\g\e\h\9\g\u\f\g\b\h\5\p\r\5\9\0\2\7\h\7\4\g\o\h\2\c\s\r\v\9\v\f\1\d\r\o\4\y\x\7\r\o\c\3\y\j\o\m\w\r\g\c\w\9\r\0\0\y\r\h\t\y\p\o\l\x\5\d\b\c\9\8\g\i\5\l\w\5\c\p\s\8\5\s\3\b\2\6\z\0\z\l\v\2\5\4\0\j\j\a\u\6\1\6\3\l\i\l\a\8\l\u\q\y\9\z\n\m\l\s\0\i\b\c\1\s\f\w\w\z\m\2\a\v\n\4\v\2\t\9\0\k\r\q\0\8\o\d\b\j\z\f\w\a\b\y\a\b\m\t\q\5\c\n\y\d\s\c\j\v\1\h\0\a\u\3\v\4\p\h\v\0\4\l\w\n\g\a\e\u\4\g\i\x\4\s\t\n\h\d\c\h\n\t\5\z\5\m\5\5\d\o\v\b\n\r\q\s\0\6\a\5\x\n\o\u\i\h\n\5\d\l\v\d\d\2\q\c\g\b\r\j\w\n\o\u\u\v\4\t\0\h\j\v\2\k\g\l\7\v\x\5\p\z\m\l\a\3\d\r\y\t\a\p\v\a\6\i\g\r\n\t\m\h\b\j\o\z\y\7\a\m\u\u\v\x\v\w\4\m\m\o\9\9\h\p\l\c\0\g\v\n\7\t\5\a\v\6\a\r\n\k\m\6\j\g\3\s\2\o\0\g\n\i\t\u\7\x\0\r\u\d\4\4\v\j\r\l\9\d\p\z\3\d\b\t\n\a\4\r\w\u\x\9\7\4\o\c\3\7\4\4\w\6\s\e\d\9\k\k\9\0\t\l\7\f\2\8\f\t\a\t\z\t\7\4\7\p\l\q\e\2\d\w\3\6\g\e\c\z\b\z\m\f\x\o\j\e\9\3\r\a\t\o\l\q\5\5\9\c\s\h\h\r\f\l\m\t\r\c\x\z\0\p\f\g\t\r\7\m\c\r\p\s\t\1\q\p\u\1\e\9\5\7\6\r\n\r\4\u\m\7\3\d\z\6\i\t\s\l\r\c\0\7\9\g\1\c\s\z\s\0\0\z\n\x\r\a\z\a\j\p\o\o\y\a\2\g\h\d\w\y\e\j\f\q\m\c\g\z\c\t\d\s\3\u\t\r\o\p\p\v\m\j\1\6\t\v\s\7\v\w\y\r\3\1\8\x\h\q\z\c\y\m\d\n\z\6\3\2\0\o\7\2\y\i\s\9\o\q\k\w\d\m\0\9\h\2\i\k\0\5\e\n\o\c\u\c\3\u\g\2\8\r\r\l\n\c\b\a\1\0\z\o\m\z\x\3\p\a\7\g\0\d\k\s\5\r\m\0\j\q\1\f\t\a\e\4\8\j\q\m\7\l\8\8\q\z\y\y\9\9\g\y\u\f\t\f\m\l\c\s\u\q\8\o\t\d\r\v\g\g\1\n\d\l\6\h\q\e\t\w\c\z\7\c\u\1\m\w\w\l\v\t\i\r\l\u\w\c\x\r\r\f\c\h\7\q\h\s\0\f\k\z\p\j
\o\j\5\g\e\6\m\y\7\9\j\q\g\n\2\k\5\d\e\i\c\0\c\9\0\0\4\m\d\c\r\6\d\q\q\r\q\r\6\g\3\g\j\z\g\f\7\e\l\5\f\0\q\g\b\z\i\e\f\f\6\g\w\w\q\b\j\y\h\5\r\4\c\o\z\f\4\i\3\3\u\q\3\j\r\y\4\3\c\u\j\o\x\z\m\o\0\r\7\c\t\c\z\q\4\3\y\m\6\5\k\a\q\a\t\l\2\8\i\o\y\f\n\7\6\q\z\a\c\3\c\r\2\5\6\g\7\l\v\e\0\c\w\v\x\8\l\f\0\x\0\p\5\o\f\r\m\m\n\5\c\h\1\1\y\9\4\p\z\4\1\e\0\6\c\n\d\2\z\l\a\e\w\r\u\h\x\u\l\e\j\a\s\m\j\k\9\n\s\p\l\4\q\7\r\v\p\r\1\w\a\k\4\r\w\u\t\m\8\q\v\9\0\d\h\q\l\8\m\p\z\6\f\y\q\8\p\3\2\a\2\c\w\w\2\r\s\y\q\f\h\s\x\e\z\i\y\v\r\n\q\d\y\e\5\i\4\h\e\8\a\y\h\i\g\o\s\q\j\d\3\d\x\1\4\h\4\z\9\l\w\5\4\b\q\t\9\6\v\t\b\3\8\k\m\6\2\e\2\f\h\i\a\w\m\z\y\h\1\1\5\d\1\d\z\6\g\6\q\k\6\x\v\d\0\d\t\8\t\a\h\s\3\r\w\c\2\s\u\v\2\m\n\a\d\k\q\x\v\u\7\b\p\4\l\c\e\7\c\c\6\o\l\z\8\7\9\l\y\m\0\i\p\0\p\t\l\t\e\5\j\i\a\r\q\l\h\p\9\c\x\j\u\m\8\c\0\e\l\l\3\6\o\q\j\s\d\j\1\d\u\n\7\r\x\a\h\r\9\y\f\o\b\l\s\5\b\n\i\d\t\4\p\t\2\l\6\l\5\6\5\p\t\o\6\m\f\q\a\n\y\v\1\y\k\p\3\h\v\d\z\d\s\e\1\a\6\5\7\x\7\z\i\v\d\k\m\w\4\a\g\t\d\a\g\g\1\0\a\o\a\b\c\x\7\2\6\g\v\b\7\5\c\t\n\y\t\k\k\e\j\f\t\w\l\1\3\u\g\4\s\0\k\x\i\r\m\o\b\0\d\7\k\e\t\h\j\r\n\f\b\0\t\v\u\s\x\9\z\i\7\3\o\d\6\m\t\b\u\o\w\s\m\m\j\l\h\9\q\5\7\2\q\h\7\x\6\a\r\k\z\f\v\q\z\b\w\z\6\0\w\n\b\h\n\c\3\n\6\8\n\m\f\6\i\v\k\x\4\v\y\h\9\j\g\v\8\g\b\b\r\i\g\3\n\m\u\t\w\x\1\3\v\w\l\w\7\5\a\8\d\q\6\p\r\g\c\7\s\5\j\b\a\3\5\g\l\k\h\b\2\g\j\o\2\k\g\o\a\t\7\0\2\f\n\5\y\n\a\r\e\e\k\o\v\r\t\2\2\8\v\2\a\8\t\k\x\v\a\0\r\1\s\e\i\4\q\5\a\1\o\h\o\n\7\q\u\j\7\n\e\a\d\k\c\l\z\h\g\o\j\l\f\f\y\m\u\7\d\h\a\g\9\y\h\8\8\s\f\q\0\d\8\5\z\l\j\r\9\2\t\7\z\s\k\0\h\4\d\l\l\h\o\f\x\c\r\u\k\f\p\8\8\h\0\5\m\x\h\q\s\m\k\p\w\3\c\5\g\i\q\z\m\m\4\h\y\n\c\f\b\t\e\9\j\2\o\d\f\0\o\w\7\k\4\c\f\w\y\7\c\d\i\7\x\3\c\4\t\g\q\f\4\o\x\s\p\1\k\l\1\4\n\x\p\l\o\x\0\d\v\n\r\q\4\0\t\n\o\3\8\q\l\6\z\8\r\e\7\c\k\j\s\l\p\n\e\8\s\5\9\d\p\h\m\6\s\9\g\f\q\3\t\n\g\1\l\7\q\d\q\h\a\8\i\k\f\p\f\0\k\r\v\q\o\q\u\7\j\f\p\w\l\1\l\v\2\2\y\7\5\a\2\u\z\a\n\l\p\v\6\l\9\c\1\8\s\l\q\u\8\6\9\9\e\n\2\q\z\y\a\d\3\3\n\c\b\e\x\p\r\q\0\j\l\f\q\6\e\r\3\c\q\u\a\9\r\q\z\l\w\3\8\b\p\n\p\0\1\g\2\3\k\r\w\m\t\x\r\w\c\i\p\s\6\h\2\z\e\g\9\v\p\8\8\w\g\8\3\u\r\f\t\e\5\j\e\r\e\s\5\e\y\a\l\g\5\m\d\n\4\8\u\l\r\t\w\f\f\5\7\v\d\r\s\7\c\z\x\k\b\e\q\r\i\b\v\9\q\f\u\p\r\6\b\2\r\z\k\q\k\8\m\f\y\j\i\p\d\m\r\j\k\3\z\7\j\g\p\7\x\h\s\j\o\l\3\0\k\6\h\s\g\k\u\9\s\5\j\5\k\j\n\2\m\d\5\5\5\e\n\8\a\o\2\z\g\p\r\y\q\o\h\f\q\p\o\7\k\i\l\z\z\y\0\8\t\x\k\z\o\h\2\4\1\9\v\l\t\8\w\o\0\o\g\x\u\x\g\d\t\q\1\a\g\z\m\v\8\n\y\u\i\d\g\f\j\u\2\m\s\h\a\9\v\w\3\m\n\s\b\m\7\7\u\d\2\f\j\s\x\9\j\7\f\f\5\n\1\x\d\k\n\i\k\z\5\6\p\r\c\l\n\t\y\a\x\k\c\3\9\c\s\u\i\0\2\a\u\y\2\7\z\g\i\u\r\j\d\q\s\0\7\8\l\8\h\r\4\b\1\m\t\7\2\9\v\l\l\u\i\p\2\0\p\j\c\g\n\3\v\1\5\h\c\a\d\x\n\y\9\e\t\d\s\7\0\p\i\0\6\7\i\8\8\1\i\n\z\m\9\o\r\q\l\d\3\n\8\j\p\e\m\u\m\4\b\u\o\9\e\c\4\w\b\m\j\y\p\1\q\s\c\f\g\z\q\p\s\o\p\7\9\i\e\n\8\l\i\c\c\f\g\a\f\u\p\4\q\n\h\9\b\w\7\z\m\b\v\1\r\t\9\4\t\z\v\f\h\a\9\c\e\9\0\8\i\f\z\k\h\9\r\n\k\r\b\j\n\s\o\3\6\u\c\2\u\h\2\i\m\1\i\l\5\p\z\9\t\x\b\p\3\r\n\c\v\2\f\7\h\n\j\s\t\j\p\b\0\v\x\d\e\q\t\y\a\1\b\h\3\x\0\7\i\b\5\u\z\n\v\a\u\q\p\m\a\s\w\s\6\k\u\j\1\l\t\4\4\l\o\k\v\l\6\z\i\f\3\e\6\5\u\4\7\5\0\c\r\f\5\3\u\9\i\d\8\x\e\d\f\6\6\h\k\3\u\y\b\e\c\m\l\l\9\e\u\i\c\r\p\v\d\r\m\r\8\4\j\4\5\d\n\p\j\a\c\n\9\g\b\e\r\w\f\d\z\a\8\e\6\u\h\4\j\0\3\n\d\n\7\1\p\f\4\v\a\t\o\9\h\4\y\8\f\1\v\o\0\p\r\q\7\c\9\4\x\t\y\0\y\5\g\g\x\h\9\a\6\5\3\d\n\c\6\q\u\x\m\g\s\l\k\j\8\d\5\r\5\m\n\6\i\4\o\d\f\r\0\v\z\r\v\w\4\l\3\j\p\w\e\d\n\t\5\s\5\0\g\g\t\9\0\h\f\9\9\7\n\u\1\q\4\l\y\v\i\z\x\g\5\0\a\g\n\v\v\6\5\p\i\5\i\c\1\d\0\4\7\k\3\1\n\8\h\w\3\y\x\t\i\q\x\h\t\9\5\
p\q\6\t\x\l\k\i\y\k\n\t\u\7\6\k\c\a\d\r\q\b\o\4\7\m\1\p\4\9\u\8\y\e\q\x\9\u\n\2\g\8\9\6\c\8\c\m\v\9\e\i\m\9\c\5\i\d\3\v\n\4\h\d\s\p\q\9\7\o\n\v\5\5\c\6\o\m\x\3\4\1\d\v\w\8\0\1\x\i\6\l\7\7\j\2\k\d\g\g\g\6\r\u\x\y\q\6\m\e\l\w\1\j\f\7\a\p\1\9\g\3\g\f\h\w\u\r\c\j\5\t\a\y\i\6\0\n\4\8\h\n\q\r\t\7\u\9\5\d\0\c\h\b\5\a\0\n\7\8\8\q\x\r\8\r\o\3\g\e\r\p\r\s\m\i\t\u\z\9\q\i\r\x\d\2\k\5\9\p\m\y\c\z\k\b\4\4\6\k\0\2\h\h\d\6\3\3\s\d\v\i\e\w\f\g\p\s\n\d\a\g\w\g\k\s\k\4\o\c\5\n\8\z\7\q\w\i\x\g\1\d\k\r\e\t\f\x\l\5\c\a\i\u\e\x\m\6\d\2\3\y\t\2\2\v\3\l\0\0\n\s\2\d\0\2\3\q\w\a\k\v\1\1\5\q\6\8\1\z\s\1\u\7\w\2\j\c\5\x\s\h\j\p\8\c\v\5\e\2\v\6\k\s\y\1\5\1\r\y\t\n\c\q\7\a\9\c\5\4\0\x\6\n\x\o\f\i\y\5\7\m\i\d\0\k\o\l\2\a\u\c\p\n\g\r\2\a\e\d\z\0\q\j\k\v\b\4\i\0\c\y\u\s\9\c\3\e\k\q\n\f\q\z\8\x\v\c\h\w\w\q\0\p\v\3\v\h\y\h\j\t\i\v\a\y\n\n\o\z\o\9\5\9\y\1\s\4\3\j\n\2\7\t\5\w\6\m\l\x\b\n\2\f\y\7\d\q\p\3\v\y\p\m\b\r\s\q\w\b\5\2\d\7\h\r\3\m\u\p\h\9\4\y\u\s\c\5\8\5\4\i\o\i\9\c\2\9\e\y\z\2\s\j\s\6\v\r\s\e\o\w\4\k\f\m\o\7\y\x\e\d\f\f\y\u\6\k\y\u\a\9\r\k\b\u\y\x\f\1\z\s\4\a\r\g\u\a\i\g\e\x\o\1\f\j\8\w\2\p\f\q\9\2\u\z\y\j\e\0\c\9\j\b\4\v\s\t\7\j\y\m\m\8\g\m\l\i\e\2\1\j\f\h\0\8\9\o\3\3\5\3\g\t\h\h\u\w\a\f\2\5\d\6\t\a\w\1\t\a\5\m\5\p\t\i\b\1\0\d\4\6\f\j\v\s\e\w\z\2\s\1\b\r\y\q\w\t\m\u\3\i\e\n\x\o\7\l\z\r\m\x\u\v\2\y\x\1\n\0\l\7\e\f\3\9\v\a\j\b\6\e\4\7\3\3\p\d\y\x\l\0\c\u\t\t\k\k\9\3\7\z\b\6\l\l\y\7\d\9\w\v\3\d\j\y\d\j\t\j\u\d\6\s\l\9\q\4\9\v\9\l\p\m\1\4\5\q\g\e\x\m\p\q\t\m\6\p\u\m\p\b\n\3\p\n\x\q\x\1\9\a\7\z\g\c\m\x\m\k\h\q\9\6\w\h\e\s\7\d\w\x\8\v\c\g\v\4\q\d\0\p\i\l\5\9\0\q\7\e\t\m\n\j\2\r\r\i\u\l\u\h\3\5\n\g\r\h\3\v\c\7\z\0\m\0\s\8\1\y\7\k\q\w\p\i\1\r\y\o\s\j\6\3\s\5\w\o\b\7\2\2\u\e\7\i\s\0\q\5\k\t\i\q\c\a\s\p\d\u\o\0\c\v\y\j\t\w\9\f\p\3\c\u\i\p\k\2\f\k\m\p\b\o\x\i\u\7\6\4\g\b\n\n\x\b\p\7\d\3\k\a\n\5\x\r\h\l\v\4\f\a\p\l\5\j\a\8\8\n\j\1\d\g\f\j\9\u\8\v\8\x\l\c\9\v\4\t\l\4\a\v\2\9\u\2\y\v\l\8\s\t\o\c\w\k\j\1\8\n\2\o\8\n\4\a\c\z\a\k\e\o\7\3\8\2\c\m\u\t\6\6\1\2\7\y\s\n\1\z\o\e\o\a\w\u\p\w\v\k\7\h\e\f\p\q\p\z\m\n\u\w\y\l\m\6\l\g\t\a\4\i\b\8\x\w\j\i\i\x\0\u\y\i\z\q\t\6\m\r\4\k\l\r\d\5\h\f\v\e\5\v\v\v\x\r\n\a\z\3\7\n\u\2\j\e\6\7\3\j\x\c\k\m\w\3\4\l\j\0\u\t\v\w\c\4\b\7\0\q\u\f\9\i\i\x\c\b\9\u\b\n\v\d\x\a\5\e\v\5\p\s\e\9\p\x\a\i\r\p\w\z\x\o\y\g\r\9\z\l\7\t\u\a\b\4\6\v\q\o\o\6\0\z\0\2\e\y\i\i\j\i\z\b\5\k\7\i\m\h\8\r\4\y\s\a\o\4\0 ]] 00:17:29.184 00:17:29.184 real 0m4.077s 00:17:29.184 user 0m3.395s 00:17:29.184 sys 0m1.962s 00:17:29.184 09:45:52 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:17:29.184 09:45:52 -- common/autotest_common.sh@10 -- # set +x 00:17:29.184 ************************************ 00:17:29.184 END TEST dd_rw_offset 00:17:29.184 ************************************ 00:17:29.184 09:45:52 -- dd/basic_rw.sh@1 -- # cleanup 00:17:29.184 09:45:52 -- dd/basic_rw.sh@76 -- # clear_nvme Nvme0n1 00:17:29.184 09:45:52 -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:17:29.184 09:45:52 -- dd/common.sh@11 -- # local nvme_ref= 00:17:29.184 09:45:52 -- dd/common.sh@12 -- # local size=0xffff 00:17:29.184 09:45:52 -- dd/common.sh@14 -- # local bs=1048576 00:17:29.184 09:45:52 -- dd/common.sh@15 -- # local count=1 00:17:29.184 09:45:52 -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 00:17:29.184 09:45:52 -- dd/common.sh@18 -- # gen_conf 00:17:29.184 09:45:52 -- dd/common.sh@31 -- # xtrace_disable 00:17:29.184 09:45:52 -- common/autotest_common.sh@10 -- # set +x 00:17:29.184 { 00:17:29.184 "subsystems": [ 00:17:29.184 { 00:17:29.184 
"subsystem": "bdev", 00:17:29.184 "config": [ 00:17:29.184 { 00:17:29.184 "params": { 00:17:29.184 "trtype": "pcie", 00:17:29.184 "traddr": "0000:00:10.0", 00:17:29.184 "name": "Nvme0" 00:17:29.184 }, 00:17:29.184 "method": "bdev_nvme_attach_controller" 00:17:29.184 }, 00:17:29.184 { 00:17:29.184 "method": "bdev_wait_for_examine" 00:17:29.184 } 00:17:29.184 ] 00:17:29.184 } 00:17:29.184 ] 00:17:29.184 } 00:17:29.443 [2024-04-18 09:45:52.617225] Starting SPDK v24.05-pre git sha1 65b4e17c6 / DPDK 23.11.0 initialization... 00:17:29.443 [2024-04-18 09:45:52.617391] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64444 ] 00:17:29.443 [2024-04-18 09:45:52.790309] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:29.702 [2024-04-18 09:45:53.026735] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:17:31.223  Copying: 1024/1024 [kB] (average 1000 MBps) 00:17:31.223 00:17:31.223 09:45:54 -- dd/basic_rw.sh@77 -- # rm -f /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:17:31.223 ************************************ 00:17:31.223 END TEST spdk_dd_basic_rw 00:17:31.223 ************************************ 00:17:31.223 00:17:31.223 real 0m48.400s 00:17:31.223 user 0m40.062s 00:17:31.223 sys 0m20.975s 00:17:31.223 09:45:54 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:17:31.223 09:45:54 -- common/autotest_common.sh@10 -- # set +x 00:17:31.223 09:45:54 -- dd/dd.sh@21 -- # run_test spdk_dd_posix /home/vagrant/spdk_repo/spdk/test/dd/posix.sh 00:17:31.223 09:45:54 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:17:31.223 09:45:54 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:17:31.223 09:45:54 -- common/autotest_common.sh@10 -- # set +x 00:17:31.482 ************************************ 00:17:31.482 START TEST spdk_dd_posix 00:17:31.482 ************************************ 00:17:31.482 09:45:54 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/dd/posix.sh 00:17:31.482 * Looking for test storage... 
00:17:31.482 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:17:31.482 09:45:54 -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:17:31.482 09:45:54 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:31.482 09:45:54 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:31.482 09:45:54 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:31.482 09:45:54 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:31.482 09:45:54 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:31.482 09:45:54 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:31.482 09:45:54 -- paths/export.sh@5 -- # export PATH 00:17:31.482 09:45:54 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:31.482 09:45:54 -- dd/posix.sh@121 -- # msg[0]=', using AIO' 00:17:31.482 09:45:54 -- dd/posix.sh@122 -- # msg[1]=', liburing in use' 00:17:31.482 09:45:54 -- dd/posix.sh@123 -- # msg[2]=', disabling liburing, forcing AIO' 00:17:31.482 09:45:54 -- dd/posix.sh@125 -- # trap cleanup EXIT 00:17:31.482 09:45:54 -- dd/posix.sh@127 -- # test_file0=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:17:31.482 09:45:54 -- dd/posix.sh@128 -- # test_file1=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:17:31.482 09:45:54 -- dd/posix.sh@130 -- # tests 00:17:31.482 09:45:54 -- dd/posix.sh@99 -- # printf '* First test run%s\n' ', liburing in use' 00:17:31.482 * First test run, liburing in use 00:17:31.482 09:45:54 -- dd/posix.sh@102 -- # run_test 
dd_flag_append append 00:17:31.482 09:45:54 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:17:31.482 09:45:54 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:17:31.482 09:45:54 -- common/autotest_common.sh@10 -- # set +x 00:17:31.482 ************************************ 00:17:31.482 START TEST dd_flag_append 00:17:31.482 ************************************ 00:17:31.482 09:45:54 -- common/autotest_common.sh@1111 -- # append 00:17:31.482 09:45:54 -- dd/posix.sh@16 -- # local dump0 00:17:31.482 09:45:54 -- dd/posix.sh@17 -- # local dump1 00:17:31.482 09:45:54 -- dd/posix.sh@19 -- # gen_bytes 32 00:17:31.482 09:45:54 -- dd/common.sh@98 -- # xtrace_disable 00:17:31.482 09:45:54 -- common/autotest_common.sh@10 -- # set +x 00:17:31.482 09:45:54 -- dd/posix.sh@19 -- # dump0=6k4w5lf13asrll9oddk3jqw3mntzogd6 00:17:31.482 09:45:54 -- dd/posix.sh@20 -- # gen_bytes 32 00:17:31.482 09:45:54 -- dd/common.sh@98 -- # xtrace_disable 00:17:31.483 09:45:54 -- common/autotest_common.sh@10 -- # set +x 00:17:31.483 09:45:54 -- dd/posix.sh@20 -- # dump1=48wu91z5uyarz53bb6d41nsvg2hk8mj6 00:17:31.483 09:45:54 -- dd/posix.sh@22 -- # printf %s 6k4w5lf13asrll9oddk3jqw3mntzogd6 00:17:31.483 09:45:54 -- dd/posix.sh@23 -- # printf %s 48wu91z5uyarz53bb6d41nsvg2hk8mj6 00:17:31.483 09:45:54 -- dd/posix.sh@25 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=append 00:17:31.742 [2024-04-18 09:45:54.962737] Starting SPDK v24.05-pre git sha1 65b4e17c6 / DPDK 23.11.0 initialization... 00:17:31.742 [2024-04-18 09:45:54.963144] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64536 ] 00:17:32.001 [2024-04-18 09:45:55.135541] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:32.001 [2024-04-18 09:45:55.378132] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:17:33.504  Copying: 32/32 [B] (average 31 kBps) 00:17:33.504 00:17:33.504 ************************************ 00:17:33.504 END TEST dd_flag_append 00:17:33.504 ************************************ 00:17:33.504 09:45:56 -- dd/posix.sh@27 -- # [[ 48wu91z5uyarz53bb6d41nsvg2hk8mj66k4w5lf13asrll9oddk3jqw3mntzogd6 == \4\8\w\u\9\1\z\5\u\y\a\r\z\5\3\b\b\6\d\4\1\n\s\v\g\2\h\k\8\m\j\6\6\k\4\w\5\l\f\1\3\a\s\r\l\l\9\o\d\d\k\3\j\q\w\3\m\n\t\z\o\g\d\6 ]] 00:17:33.504 00:17:33.504 real 0m2.013s 00:17:33.504 user 0m1.620s 00:17:33.504 sys 0m1.034s 00:17:33.504 09:45:56 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:17:33.504 09:45:56 -- common/autotest_common.sh@10 -- # set +x 00:17:33.763 09:45:56 -- dd/posix.sh@103 -- # run_test dd_flag_directory directory 00:17:33.763 09:45:56 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:17:33.763 09:45:56 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:17:33.763 09:45:56 -- common/autotest_common.sh@10 -- # set +x 00:17:33.763 ************************************ 00:17:33.763 START TEST dd_flag_directory 00:17:33.763 ************************************ 00:17:33.763 09:45:56 -- common/autotest_common.sh@1111 -- # directory 00:17:33.763 09:45:56 -- dd/posix.sh@31 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=directory --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:17:33.763 09:45:56 -- 
common/autotest_common.sh@638 -- # local es=0 00:17:33.763 09:45:56 -- common/autotest_common.sh@640 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=directory --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:17:33.763 09:45:56 -- common/autotest_common.sh@626 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:17:33.763 09:45:56 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:17:33.763 09:45:56 -- common/autotest_common.sh@630 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:17:33.763 09:45:56 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:17:33.763 09:45:56 -- common/autotest_common.sh@632 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:17:33.763 09:45:56 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:17:33.763 09:45:56 -- common/autotest_common.sh@632 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:17:33.763 09:45:56 -- common/autotest_common.sh@632 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:17:33.763 09:45:56 -- common/autotest_common.sh@641 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=directory --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:17:33.763 [2024-04-18 09:45:57.075228] Starting SPDK v24.05-pre git sha1 65b4e17c6 / DPDK 23.11.0 initialization... 00:17:33.763 [2024-04-18 09:45:57.075380] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64589 ] 00:17:34.021 [2024-04-18 09:45:57.237604] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:34.279 [2024-04-18 09:45:57.469968] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:17:34.538 [2024-04-18 09:45:57.784160] spdk_dd.c: 894:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:17:34.538 [2024-04-18 09:45:57.784258] spdk_dd.c:1083:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:17:34.538 [2024-04-18 09:45:57.784320] app.c: 966:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:17:35.474 [2024-04-18 09:45:58.521071] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:17:35.733 09:45:58 -- common/autotest_common.sh@641 -- # es=236 00:17:35.733 09:45:58 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:17:35.733 09:45:58 -- common/autotest_common.sh@650 -- # es=108 00:17:35.733 09:45:58 -- common/autotest_common.sh@651 -- # case "$es" in 00:17:35.733 09:45:58 -- common/autotest_common.sh@658 -- # es=1 00:17:35.733 09:45:58 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:17:35.733 09:45:58 -- dd/posix.sh@32 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=directory 00:17:35.733 09:45:58 -- common/autotest_common.sh@638 -- # local es=0 00:17:35.733 09:45:58 -- common/autotest_common.sh@640 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=directory 00:17:35.733 09:45:58 -- common/autotest_common.sh@626 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 
00:17:35.733 09:45:58 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:17:35.733 09:45:58 -- common/autotest_common.sh@630 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:17:35.733 09:45:58 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:17:35.733 09:45:58 -- common/autotest_common.sh@632 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:17:35.733 09:45:58 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:17:35.733 09:45:58 -- common/autotest_common.sh@632 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:17:35.733 09:45:58 -- common/autotest_common.sh@632 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:17:35.733 09:45:58 -- common/autotest_common.sh@641 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=directory 00:17:35.733 [2024-04-18 09:45:59.032908] Starting SPDK v24.05-pre git sha1 65b4e17c6 / DPDK 23.11.0 initialization... 00:17:35.733 [2024-04-18 09:45:59.033092] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64616 ] 00:17:35.993 [2024-04-18 09:45:59.205161] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:36.252 [2024-04-18 09:45:59.444024] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:17:36.511 [2024-04-18 09:45:59.758136] spdk_dd.c: 894:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:17:36.511 [2024-04-18 09:45:59.758237] spdk_dd.c:1132:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:17:36.511 [2024-04-18 09:45:59.758283] app.c: 966:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:17:37.447 [2024-04-18 09:46:00.496237] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:17:37.707 09:46:00 -- common/autotest_common.sh@641 -- # es=236 00:17:37.707 09:46:00 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:17:37.707 09:46:00 -- common/autotest_common.sh@650 -- # es=108 00:17:37.707 09:46:00 -- common/autotest_common.sh@651 -- # case "$es" in 00:17:37.707 09:46:00 -- common/autotest_common.sh@658 -- # es=1 00:17:37.707 09:46:00 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:17:37.707 00:17:37.707 real 0m3.919s 00:17:37.707 user 0m3.187s 00:17:37.707 sys 0m0.507s 00:17:37.707 09:46:00 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:17:37.707 ************************************ 00:17:37.707 END TEST dd_flag_directory 00:17:37.707 ************************************ 00:17:37.707 09:46:00 -- common/autotest_common.sh@10 -- # set +x 00:17:37.707 09:46:00 -- dd/posix.sh@104 -- # run_test dd_flag_nofollow nofollow 00:17:37.707 09:46:00 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:17:37.707 09:46:00 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:17:37.707 09:46:00 -- common/autotest_common.sh@10 -- # set +x 00:17:37.707 ************************************ 00:17:37.707 START TEST dd_flag_nofollow 00:17:37.707 ************************************ 00:17:37.707 09:46:01 -- common/autotest_common.sh@1111 -- # nofollow 00:17:37.707 09:46:01 -- dd/posix.sh@36 -- # local test_file0_link=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link 00:17:37.707 09:46:01 -- dd/posix.sh@37 -- # 
local test_file1_link=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link 00:17:37.707 09:46:01 -- dd/posix.sh@39 -- # ln -fs /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link 00:17:37.707 09:46:01 -- dd/posix.sh@40 -- # ln -fs /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link 00:17:37.707 09:46:01 -- dd/posix.sh@42 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --iflag=nofollow --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:17:37.707 09:46:01 -- common/autotest_common.sh@638 -- # local es=0 00:17:37.707 09:46:01 -- common/autotest_common.sh@640 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --iflag=nofollow --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:17:37.707 09:46:01 -- common/autotest_common.sh@626 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:17:37.707 09:46:01 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:17:37.707 09:46:01 -- common/autotest_common.sh@630 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:17:37.707 09:46:01 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:17:37.707 09:46:01 -- common/autotest_common.sh@632 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:17:37.707 09:46:01 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:17:37.707 09:46:01 -- common/autotest_common.sh@632 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:17:37.707 09:46:01 -- common/autotest_common.sh@632 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:17:37.707 09:46:01 -- common/autotest_common.sh@641 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --iflag=nofollow --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:17:37.966 [2024-04-18 09:46:01.115831] Starting SPDK v24.05-pre git sha1 65b4e17c6 / DPDK 23.11.0 initialization... 
00:17:37.966 [2024-04-18 09:46:01.116012] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64667 ] 00:17:37.966 [2024-04-18 09:46:01.291980] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:38.226 [2024-04-18 09:46:01.561503] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:17:38.484 [2024-04-18 09:46:01.874813] spdk_dd.c: 894:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link: Too many levels of symbolic links 00:17:38.484 [2024-04-18 09:46:01.874881] spdk_dd.c:1083:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link: Too many levels of symbolic links 00:17:38.484 [2024-04-18 09:46:01.874939] app.c: 966:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:17:39.421 [2024-04-18 09:46:02.608308] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:17:39.679 09:46:02 -- common/autotest_common.sh@641 -- # es=216 00:17:39.679 09:46:02 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:17:39.679 09:46:02 -- common/autotest_common.sh@650 -- # es=88 00:17:39.679 09:46:02 -- common/autotest_common.sh@651 -- # case "$es" in 00:17:39.679 09:46:02 -- common/autotest_common.sh@658 -- # es=1 00:17:39.679 09:46:02 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:17:39.679 09:46:02 -- dd/posix.sh@43 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link --oflag=nofollow 00:17:39.680 09:46:02 -- common/autotest_common.sh@638 -- # local es=0 00:17:39.680 09:46:03 -- common/autotest_common.sh@640 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link --oflag=nofollow 00:17:39.680 09:46:03 -- common/autotest_common.sh@626 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:17:39.680 09:46:03 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:17:39.680 09:46:03 -- common/autotest_common.sh@630 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:17:39.680 09:46:03 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:17:39.680 09:46:03 -- common/autotest_common.sh@632 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:17:39.680 09:46:03 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:17:39.680 09:46:03 -- common/autotest_common.sh@632 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:17:39.680 09:46:03 -- common/autotest_common.sh@632 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:17:39.680 09:46:03 -- common/autotest_common.sh@641 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link --oflag=nofollow 00:17:39.951 [2024-04-18 09:46:03.102831] Starting SPDK v24.05-pre git sha1 65b4e17c6 / DPDK 23.11.0 initialization... 
00:17:39.952 [2024-04-18 09:46:03.102998] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64689 ] 00:17:39.952 [2024-04-18 09:46:03.273665] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:40.213 [2024-04-18 09:46:03.507129] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:17:40.472 [2024-04-18 09:46:03.812873] spdk_dd.c: 894:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link: Too many levels of symbolic links 00:17:40.472 [2024-04-18 09:46:03.812930] spdk_dd.c:1132:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link: Too many levels of symbolic links 00:17:40.472 [2024-04-18 09:46:03.812991] app.c: 966:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:17:41.419 [2024-04-18 09:46:04.555803] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:17:41.677 09:46:04 -- common/autotest_common.sh@641 -- # es=216 00:17:41.677 09:46:04 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:17:41.677 09:46:04 -- common/autotest_common.sh@650 -- # es=88 00:17:41.677 09:46:04 -- common/autotest_common.sh@651 -- # case "$es" in 00:17:41.677 09:46:04 -- common/autotest_common.sh@658 -- # es=1 00:17:41.677 09:46:04 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:17:41.677 09:46:04 -- dd/posix.sh@46 -- # gen_bytes 512 00:17:41.677 09:46:04 -- dd/common.sh@98 -- # xtrace_disable 00:17:41.677 09:46:04 -- common/autotest_common.sh@10 -- # set +x 00:17:41.677 09:46:04 -- dd/posix.sh@48 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:17:41.936 [2024-04-18 09:46:05.071493] Starting SPDK v24.05-pre git sha1 65b4e17c6 / DPDK 23.11.0 initialization... 
00:17:41.936 [2024-04-18 09:46:05.071669] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64714 ] 00:17:41.936 [2024-04-18 09:46:05.246566] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:42.195 [2024-04-18 09:46:05.498249] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:17:43.831  Copying: 512/512 [B] (average 500 kBps) 00:17:43.831 00:17:43.831 09:46:06 -- dd/posix.sh@49 -- # [[ vht6o61pnpxh7jxgs6o1d49rpilmvhos9selhh5d8bgtdptvqvgm3iizeka2d468e2prq1qr703wxt5hoz8hwcla5fxap0cxodackfsj86k20hnue4esus978ffzo7t1h8e08byozoz8xn94pfekh0z32bntowu7g4j7en9zkt7xlkxvvwjqmos4y89e20ityczo011po29vi80w6h447kldsk49zlmyoetam38j3lz8m3xtxgnnpsop41lir741c6qotroahaqnfr19thjwlz6al5fu2aoly1va0kkpa0j7f43pegj1x9ir88yyf59y48z84jby0lp2rulwckoao7pyed8rq5twhnl7yt5faxlokll5vsjbgam5w772em9f7xogxsqa2ngtdewibfixs9wl0md8dqoa46olqh6aa894hdmuk76q7s379pw9y39804up0wzhbp5jrxeooae1f3hdzyp7na5vxlkwol1try8gz9zkzsoepll1qtj80xv2 == \v\h\t\6\o\6\1\p\n\p\x\h\7\j\x\g\s\6\o\1\d\4\9\r\p\i\l\m\v\h\o\s\9\s\e\l\h\h\5\d\8\b\g\t\d\p\t\v\q\v\g\m\3\i\i\z\e\k\a\2\d\4\6\8\e\2\p\r\q\1\q\r\7\0\3\w\x\t\5\h\o\z\8\h\w\c\l\a\5\f\x\a\p\0\c\x\o\d\a\c\k\f\s\j\8\6\k\2\0\h\n\u\e\4\e\s\u\s\9\7\8\f\f\z\o\7\t\1\h\8\e\0\8\b\y\o\z\o\z\8\x\n\9\4\p\f\e\k\h\0\z\3\2\b\n\t\o\w\u\7\g\4\j\7\e\n\9\z\k\t\7\x\l\k\x\v\v\w\j\q\m\o\s\4\y\8\9\e\2\0\i\t\y\c\z\o\0\1\1\p\o\2\9\v\i\8\0\w\6\h\4\4\7\k\l\d\s\k\4\9\z\l\m\y\o\e\t\a\m\3\8\j\3\l\z\8\m\3\x\t\x\g\n\n\p\s\o\p\4\1\l\i\r\7\4\1\c\6\q\o\t\r\o\a\h\a\q\n\f\r\1\9\t\h\j\w\l\z\6\a\l\5\f\u\2\a\o\l\y\1\v\a\0\k\k\p\a\0\j\7\f\4\3\p\e\g\j\1\x\9\i\r\8\8\y\y\f\5\9\y\4\8\z\8\4\j\b\y\0\l\p\2\r\u\l\w\c\k\o\a\o\7\p\y\e\d\8\r\q\5\t\w\h\n\l\7\y\t\5\f\a\x\l\o\k\l\l\5\v\s\j\b\g\a\m\5\w\7\7\2\e\m\9\f\7\x\o\g\x\s\q\a\2\n\g\t\d\e\w\i\b\f\i\x\s\9\w\l\0\m\d\8\d\q\o\a\4\6\o\l\q\h\6\a\a\8\9\4\h\d\m\u\k\7\6\q\7\s\3\7\9\p\w\9\y\3\9\8\0\4\u\p\0\w\z\h\b\p\5\j\r\x\e\o\o\a\e\1\f\3\h\d\z\y\p\7\n\a\5\v\x\l\k\w\o\l\1\t\r\y\8\g\z\9\z\k\z\s\o\e\p\l\l\1\q\t\j\8\0\x\v\2 ]] 00:17:43.831 00:17:43.831 real 0m5.958s 00:17:43.831 user 0m4.830s 00:17:43.831 sys 0m1.527s 00:17:43.831 09:46:06 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:17:43.831 09:46:06 -- common/autotest_common.sh@10 -- # set +x 00:17:43.831 ************************************ 00:17:43.831 END TEST dd_flag_nofollow 00:17:43.831 ************************************ 00:17:43.831 09:46:07 -- dd/posix.sh@105 -- # run_test dd_flag_noatime noatime 00:17:43.831 09:46:07 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:17:43.831 09:46:07 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:17:43.831 09:46:07 -- common/autotest_common.sh@10 -- # set +x 00:17:43.831 ************************************ 00:17:43.831 START TEST dd_flag_noatime 00:17:43.831 ************************************ 00:17:43.831 09:46:07 -- common/autotest_common.sh@1111 -- # noatime 00:17:43.831 09:46:07 -- dd/posix.sh@53 -- # local atime_if 00:17:43.831 09:46:07 -- dd/posix.sh@54 -- # local atime_of 00:17:43.831 09:46:07 -- dd/posix.sh@58 -- # gen_bytes 512 00:17:43.831 09:46:07 -- dd/common.sh@98 -- # xtrace_disable 00:17:43.831 09:46:07 -- common/autotest_common.sh@10 -- # set +x 00:17:43.831 09:46:07 -- dd/posix.sh@60 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:17:43.831 09:46:07 -- dd/posix.sh@60 -- # atime_if=1713433565 00:17:43.831 09:46:07 -- 
dd/posix.sh@61 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:17:43.831 09:46:07 -- dd/posix.sh@61 -- # atime_of=1713433566 00:17:43.831 09:46:07 -- dd/posix.sh@66 -- # sleep 1 00:17:44.825 09:46:08 -- dd/posix.sh@68 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=noatime --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:17:45.084 [2024-04-18 09:46:08.227446] Starting SPDK v24.05-pre git sha1 65b4e17c6 / DPDK 23.11.0 initialization... 00:17:45.084 [2024-04-18 09:46:08.227662] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64778 ] 00:17:45.084 [2024-04-18 09:46:08.399118] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:45.343 [2024-04-18 09:46:08.679398] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:17:46.843  Copying: 512/512 [B] (average 500 kBps) 00:17:46.843 00:17:46.843 09:46:10 -- dd/posix.sh@69 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:17:46.843 09:46:10 -- dd/posix.sh@69 -- # (( atime_if == 1713433565 )) 00:17:46.843 09:46:10 -- dd/posix.sh@70 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:17:46.843 09:46:10 -- dd/posix.sh@70 -- # (( atime_of == 1713433566 )) 00:17:46.843 09:46:10 -- dd/posix.sh@72 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:17:47.102 [2024-04-18 09:46:10.263256] Starting SPDK v24.05-pre git sha1 65b4e17c6 / DPDK 23.11.0 initialization... 00:17:47.102 [2024-04-18 09:46:10.263432] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64809 ] 00:17:47.102 [2024-04-18 09:46:10.428055] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:47.360 [2024-04-18 09:46:10.672052] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:17:48.994  Copying: 512/512 [B] (average 500 kBps) 00:17:48.994 00:17:48.994 09:46:12 -- dd/posix.sh@73 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:17:48.994 09:46:12 -- dd/posix.sh@73 -- # (( atime_if < 1713433570 )) 00:17:48.994 00:17:48.994 real 0m5.055s 00:17:48.994 user 0m3.290s 00:17:48.994 sys 0m2.023s 00:17:48.994 ************************************ 00:17:48.994 END TEST dd_flag_noatime 00:17:48.994 ************************************ 00:17:48.994 09:46:12 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:17:48.994 09:46:12 -- common/autotest_common.sh@10 -- # set +x 00:17:48.994 09:46:12 -- dd/posix.sh@106 -- # run_test dd_flags_misc io 00:17:48.994 09:46:12 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:17:48.994 09:46:12 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:17:48.994 09:46:12 -- common/autotest_common.sh@10 -- # set +x 00:17:48.994 ************************************ 00:17:48.994 START TEST dd_flags_misc 00:17:48.994 ************************************ 00:17:48.994 09:46:12 -- common/autotest_common.sh@1111 -- # io 00:17:48.994 09:46:12 -- dd/posix.sh@77 -- # local flags_ro flags_rw flag_ro flag_rw 00:17:48.994 09:46:12 -- dd/posix.sh@81 -- # flags_ro=(direct nonblock) 00:17:48.994 
09:46:12 -- dd/posix.sh@82 -- # flags_rw=("${flags_ro[@]}" sync dsync) 00:17:48.994 09:46:12 -- dd/posix.sh@85 -- # for flag_ro in "${flags_ro[@]}" 00:17:48.994 09:46:12 -- dd/posix.sh@86 -- # gen_bytes 512 00:17:48.994 09:46:12 -- dd/common.sh@98 -- # xtrace_disable 00:17:48.994 09:46:12 -- common/autotest_common.sh@10 -- # set +x 00:17:48.994 09:46:12 -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:17:48.995 09:46:12 -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=direct 00:17:48.995 [2024-04-18 09:46:12.357165] Starting SPDK v24.05-pre git sha1 65b4e17c6 / DPDK 23.11.0 initialization... 00:17:48.995 [2024-04-18 09:46:12.357351] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64859 ] 00:17:49.253 [2024-04-18 09:46:12.531610] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:49.511 [2024-04-18 09:46:12.772932] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:17:51.148  Copying: 512/512 [B] (average 500 kBps) 00:17:51.148 00:17:51.148 09:46:14 -- dd/posix.sh@93 -- # [[ ygl3a88660lbbeqmydzrogyavisamzaihh036lst6np55oqhz8qr1fr8ic7wx4gd9hkjk882j1nigiq2chw6a6ylazjibtefphh9evapz1nr1jqfdqc7kccej39x08c2q57rk2idbkd2pal8364vuanqf64v6n364x3km8dfq6iqdoupsyjqjrr7ulo8zlq7kqcu7tgbst3qyx80dlep5mbnbnhzrf2wn9p1wvbr83sb79whwn1vk8xn1mgbw2s6k49s7ngu6e9bpih24l0slrlaq4ra78o73k5s9dhm1iq6d264roxi9gxm3df2e8wg8t7n6wutex6kaur8pn0oj1h8ouzu1h78hsgvbtq7ohemnqiqsx5c9aygvpsmoy3satjmbhkng7h5aqnfjfcdzn2ssbxv1egux91n7c6qog69vhy8a0iqqms9nyspc83nfpxtf44d8dc4s7r5ajxyijnobn2nahfd4kr5063p1bjhqemk9ekv0nuhmtc5xaqj == \y\g\l\3\a\8\8\6\6\0\l\b\b\e\q\m\y\d\z\r\o\g\y\a\v\i\s\a\m\z\a\i\h\h\0\3\6\l\s\t\6\n\p\5\5\o\q\h\z\8\q\r\1\f\r\8\i\c\7\w\x\4\g\d\9\h\k\j\k\8\8\2\j\1\n\i\g\i\q\2\c\h\w\6\a\6\y\l\a\z\j\i\b\t\e\f\p\h\h\9\e\v\a\p\z\1\n\r\1\j\q\f\d\q\c\7\k\c\c\e\j\3\9\x\0\8\c\2\q\5\7\r\k\2\i\d\b\k\d\2\p\a\l\8\3\6\4\v\u\a\n\q\f\6\4\v\6\n\3\6\4\x\3\k\m\8\d\f\q\6\i\q\d\o\u\p\s\y\j\q\j\r\r\7\u\l\o\8\z\l\q\7\k\q\c\u\7\t\g\b\s\t\3\q\y\x\8\0\d\l\e\p\5\m\b\n\b\n\h\z\r\f\2\w\n\9\p\1\w\v\b\r\8\3\s\b\7\9\w\h\w\n\1\v\k\8\x\n\1\m\g\b\w\2\s\6\k\4\9\s\7\n\g\u\6\e\9\b\p\i\h\2\4\l\0\s\l\r\l\a\q\4\r\a\7\8\o\7\3\k\5\s\9\d\h\m\1\i\q\6\d\2\6\4\r\o\x\i\9\g\x\m\3\d\f\2\e\8\w\g\8\t\7\n\6\w\u\t\e\x\6\k\a\u\r\8\p\n\0\o\j\1\h\8\o\u\z\u\1\h\7\8\h\s\g\v\b\t\q\7\o\h\e\m\n\q\i\q\s\x\5\c\9\a\y\g\v\p\s\m\o\y\3\s\a\t\j\m\b\h\k\n\g\7\h\5\a\q\n\f\j\f\c\d\z\n\2\s\s\b\x\v\1\e\g\u\x\9\1\n\7\c\6\q\o\g\6\9\v\h\y\8\a\0\i\q\q\m\s\9\n\y\s\p\c\8\3\n\f\p\x\t\f\4\4\d\8\d\c\4\s\7\r\5\a\j\x\y\i\j\n\o\b\n\2\n\a\h\f\d\4\k\r\5\0\6\3\p\1\b\j\h\q\e\m\k\9\e\k\v\0\n\u\h\m\t\c\5\x\a\q\j ]] 00:17:51.148 09:46:14 -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:17:51.148 09:46:14 -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=nonblock 00:17:51.148 [2024-04-18 09:46:14.356957] Starting SPDK v24.05-pre git sha1 65b4e17c6 / DPDK 23.11.0 initialization... 
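The dd_flags_misc runs that follow all come from one pair of loops over the read and write flags defined above; a condensed sketch of that loop (the long [[ ... == ... ]] check after each copy re-verifies the 512-byte payload).

SPDK_DD=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
flags_ro=(direct nonblock)
flags_rw=("${flags_ro[@]}" sync dsync)
for flag_ro in "${flags_ro[@]}"; do
  # 512 fresh random bytes per input flag (gen_bytes 512 in the log) ...
  for flag_rw in "${flags_rw[@]}"; do
    # ... copied once per output flag, then dd.dump1 is compared against the payload.
    "$SPDK_DD" --if=dd.dump0 --iflag="$flag_ro" --of=dd.dump1 --oflag="$flag_rw"
  done
done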
00:17:51.148 [2024-04-18 09:46:14.357193] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64886 ] 00:17:51.148 [2024-04-18 09:46:14.524347] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:51.714 [2024-04-18 09:46:14.802831] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:17:52.937  Copying: 512/512 [B] (average 500 kBps) 00:17:52.937 00:17:52.937 09:46:16 -- dd/posix.sh@93 -- # [[ ygl3a88660lbbeqmydzrogyavisamzaihh036lst6np55oqhz8qr1fr8ic7wx4gd9hkjk882j1nigiq2chw6a6ylazjibtefphh9evapz1nr1jqfdqc7kccej39x08c2q57rk2idbkd2pal8364vuanqf64v6n364x3km8dfq6iqdoupsyjqjrr7ulo8zlq7kqcu7tgbst3qyx80dlep5mbnbnhzrf2wn9p1wvbr83sb79whwn1vk8xn1mgbw2s6k49s7ngu6e9bpih24l0slrlaq4ra78o73k5s9dhm1iq6d264roxi9gxm3df2e8wg8t7n6wutex6kaur8pn0oj1h8ouzu1h78hsgvbtq7ohemnqiqsx5c9aygvpsmoy3satjmbhkng7h5aqnfjfcdzn2ssbxv1egux91n7c6qog69vhy8a0iqqms9nyspc83nfpxtf44d8dc4s7r5ajxyijnobn2nahfd4kr5063p1bjhqemk9ekv0nuhmtc5xaqj == \y\g\l\3\a\8\8\6\6\0\l\b\b\e\q\m\y\d\z\r\o\g\y\a\v\i\s\a\m\z\a\i\h\h\0\3\6\l\s\t\6\n\p\5\5\o\q\h\z\8\q\r\1\f\r\8\i\c\7\w\x\4\g\d\9\h\k\j\k\8\8\2\j\1\n\i\g\i\q\2\c\h\w\6\a\6\y\l\a\z\j\i\b\t\e\f\p\h\h\9\e\v\a\p\z\1\n\r\1\j\q\f\d\q\c\7\k\c\c\e\j\3\9\x\0\8\c\2\q\5\7\r\k\2\i\d\b\k\d\2\p\a\l\8\3\6\4\v\u\a\n\q\f\6\4\v\6\n\3\6\4\x\3\k\m\8\d\f\q\6\i\q\d\o\u\p\s\y\j\q\j\r\r\7\u\l\o\8\z\l\q\7\k\q\c\u\7\t\g\b\s\t\3\q\y\x\8\0\d\l\e\p\5\m\b\n\b\n\h\z\r\f\2\w\n\9\p\1\w\v\b\r\8\3\s\b\7\9\w\h\w\n\1\v\k\8\x\n\1\m\g\b\w\2\s\6\k\4\9\s\7\n\g\u\6\e\9\b\p\i\h\2\4\l\0\s\l\r\l\a\q\4\r\a\7\8\o\7\3\k\5\s\9\d\h\m\1\i\q\6\d\2\6\4\r\o\x\i\9\g\x\m\3\d\f\2\e\8\w\g\8\t\7\n\6\w\u\t\e\x\6\k\a\u\r\8\p\n\0\o\j\1\h\8\o\u\z\u\1\h\7\8\h\s\g\v\b\t\q\7\o\h\e\m\n\q\i\q\s\x\5\c\9\a\y\g\v\p\s\m\o\y\3\s\a\t\j\m\b\h\k\n\g\7\h\5\a\q\n\f\j\f\c\d\z\n\2\s\s\b\x\v\1\e\g\u\x\9\1\n\7\c\6\q\o\g\6\9\v\h\y\8\a\0\i\q\q\m\s\9\n\y\s\p\c\8\3\n\f\p\x\t\f\4\4\d\8\d\c\4\s\7\r\5\a\j\x\y\i\j\n\o\b\n\2\n\a\h\f\d\4\k\r\5\0\6\3\p\1\b\j\h\q\e\m\k\9\e\k\v\0\n\u\h\m\t\c\5\x\a\q\j ]] 00:17:52.937 09:46:16 -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:17:52.937 09:46:16 -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=sync 00:17:53.195 [2024-04-18 09:46:16.380318] Starting SPDK v24.05-pre git sha1 65b4e17c6 / DPDK 23.11.0 initialization... 
00:17:53.195 [2024-04-18 09:46:16.380504] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64912 ] 00:17:53.195 [2024-04-18 09:46:16.553731] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:53.454 [2024-04-18 09:46:16.839338] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:17:54.955  Copying: 512/512 [B] (average 125 kBps) 00:17:54.955 00:17:55.213 09:46:18 -- dd/posix.sh@93 -- # [[ ygl3a88660lbbeqmydzrogyavisamzaihh036lst6np55oqhz8qr1fr8ic7wx4gd9hkjk882j1nigiq2chw6a6ylazjibtefphh9evapz1nr1jqfdqc7kccej39x08c2q57rk2idbkd2pal8364vuanqf64v6n364x3km8dfq6iqdoupsyjqjrr7ulo8zlq7kqcu7tgbst3qyx80dlep5mbnbnhzrf2wn9p1wvbr83sb79whwn1vk8xn1mgbw2s6k49s7ngu6e9bpih24l0slrlaq4ra78o73k5s9dhm1iq6d264roxi9gxm3df2e8wg8t7n6wutex6kaur8pn0oj1h8ouzu1h78hsgvbtq7ohemnqiqsx5c9aygvpsmoy3satjmbhkng7h5aqnfjfcdzn2ssbxv1egux91n7c6qog69vhy8a0iqqms9nyspc83nfpxtf44d8dc4s7r5ajxyijnobn2nahfd4kr5063p1bjhqemk9ekv0nuhmtc5xaqj == \y\g\l\3\a\8\8\6\6\0\l\b\b\e\q\m\y\d\z\r\o\g\y\a\v\i\s\a\m\z\a\i\h\h\0\3\6\l\s\t\6\n\p\5\5\o\q\h\z\8\q\r\1\f\r\8\i\c\7\w\x\4\g\d\9\h\k\j\k\8\8\2\j\1\n\i\g\i\q\2\c\h\w\6\a\6\y\l\a\z\j\i\b\t\e\f\p\h\h\9\e\v\a\p\z\1\n\r\1\j\q\f\d\q\c\7\k\c\c\e\j\3\9\x\0\8\c\2\q\5\7\r\k\2\i\d\b\k\d\2\p\a\l\8\3\6\4\v\u\a\n\q\f\6\4\v\6\n\3\6\4\x\3\k\m\8\d\f\q\6\i\q\d\o\u\p\s\y\j\q\j\r\r\7\u\l\o\8\z\l\q\7\k\q\c\u\7\t\g\b\s\t\3\q\y\x\8\0\d\l\e\p\5\m\b\n\b\n\h\z\r\f\2\w\n\9\p\1\w\v\b\r\8\3\s\b\7\9\w\h\w\n\1\v\k\8\x\n\1\m\g\b\w\2\s\6\k\4\9\s\7\n\g\u\6\e\9\b\p\i\h\2\4\l\0\s\l\r\l\a\q\4\r\a\7\8\o\7\3\k\5\s\9\d\h\m\1\i\q\6\d\2\6\4\r\o\x\i\9\g\x\m\3\d\f\2\e\8\w\g\8\t\7\n\6\w\u\t\e\x\6\k\a\u\r\8\p\n\0\o\j\1\h\8\o\u\z\u\1\h\7\8\h\s\g\v\b\t\q\7\o\h\e\m\n\q\i\q\s\x\5\c\9\a\y\g\v\p\s\m\o\y\3\s\a\t\j\m\b\h\k\n\g\7\h\5\a\q\n\f\j\f\c\d\z\n\2\s\s\b\x\v\1\e\g\u\x\9\1\n\7\c\6\q\o\g\6\9\v\h\y\8\a\0\i\q\q\m\s\9\n\y\s\p\c\8\3\n\f\p\x\t\f\4\4\d\8\d\c\4\s\7\r\5\a\j\x\y\i\j\n\o\b\n\2\n\a\h\f\d\4\k\r\5\0\6\3\p\1\b\j\h\q\e\m\k\9\e\k\v\0\n\u\h\m\t\c\5\x\a\q\j ]] 00:17:55.213 09:46:18 -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:17:55.213 09:46:18 -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=dsync 00:17:55.214 [2024-04-18 09:46:18.449442] Starting SPDK v24.05-pre git sha1 65b4e17c6 / DPDK 23.11.0 initialization... 
00:17:55.214 [2024-04-18 09:46:18.449621] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64935 ] 00:17:55.472 [2024-04-18 09:46:18.622952] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:55.472 [2024-04-18 09:46:18.863478] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:17:56.976  Copying: 512/512 [B] (average 166 kBps) 00:17:56.976 00:17:56.976 09:46:20 -- dd/posix.sh@93 -- # [[ ygl3a88660lbbeqmydzrogyavisamzaihh036lst6np55oqhz8qr1fr8ic7wx4gd9hkjk882j1nigiq2chw6a6ylazjibtefphh9evapz1nr1jqfdqc7kccej39x08c2q57rk2idbkd2pal8364vuanqf64v6n364x3km8dfq6iqdoupsyjqjrr7ulo8zlq7kqcu7tgbst3qyx80dlep5mbnbnhzrf2wn9p1wvbr83sb79whwn1vk8xn1mgbw2s6k49s7ngu6e9bpih24l0slrlaq4ra78o73k5s9dhm1iq6d264roxi9gxm3df2e8wg8t7n6wutex6kaur8pn0oj1h8ouzu1h78hsgvbtq7ohemnqiqsx5c9aygvpsmoy3satjmbhkng7h5aqnfjfcdzn2ssbxv1egux91n7c6qog69vhy8a0iqqms9nyspc83nfpxtf44d8dc4s7r5ajxyijnobn2nahfd4kr5063p1bjhqemk9ekv0nuhmtc5xaqj == \y\g\l\3\a\8\8\6\6\0\l\b\b\e\q\m\y\d\z\r\o\g\y\a\v\i\s\a\m\z\a\i\h\h\0\3\6\l\s\t\6\n\p\5\5\o\q\h\z\8\q\r\1\f\r\8\i\c\7\w\x\4\g\d\9\h\k\j\k\8\8\2\j\1\n\i\g\i\q\2\c\h\w\6\a\6\y\l\a\z\j\i\b\t\e\f\p\h\h\9\e\v\a\p\z\1\n\r\1\j\q\f\d\q\c\7\k\c\c\e\j\3\9\x\0\8\c\2\q\5\7\r\k\2\i\d\b\k\d\2\p\a\l\8\3\6\4\v\u\a\n\q\f\6\4\v\6\n\3\6\4\x\3\k\m\8\d\f\q\6\i\q\d\o\u\p\s\y\j\q\j\r\r\7\u\l\o\8\z\l\q\7\k\q\c\u\7\t\g\b\s\t\3\q\y\x\8\0\d\l\e\p\5\m\b\n\b\n\h\z\r\f\2\w\n\9\p\1\w\v\b\r\8\3\s\b\7\9\w\h\w\n\1\v\k\8\x\n\1\m\g\b\w\2\s\6\k\4\9\s\7\n\g\u\6\e\9\b\p\i\h\2\4\l\0\s\l\r\l\a\q\4\r\a\7\8\o\7\3\k\5\s\9\d\h\m\1\i\q\6\d\2\6\4\r\o\x\i\9\g\x\m\3\d\f\2\e\8\w\g\8\t\7\n\6\w\u\t\e\x\6\k\a\u\r\8\p\n\0\o\j\1\h\8\o\u\z\u\1\h\7\8\h\s\g\v\b\t\q\7\o\h\e\m\n\q\i\q\s\x\5\c\9\a\y\g\v\p\s\m\o\y\3\s\a\t\j\m\b\h\k\n\g\7\h\5\a\q\n\f\j\f\c\d\z\n\2\s\s\b\x\v\1\e\g\u\x\9\1\n\7\c\6\q\o\g\6\9\v\h\y\8\a\0\i\q\q\m\s\9\n\y\s\p\c\8\3\n\f\p\x\t\f\4\4\d\8\d\c\4\s\7\r\5\a\j\x\y\i\j\n\o\b\n\2\n\a\h\f\d\4\k\r\5\0\6\3\p\1\b\j\h\q\e\m\k\9\e\k\v\0\n\u\h\m\t\c\5\x\a\q\j ]] 00:17:56.976 09:46:20 -- dd/posix.sh@85 -- # for flag_ro in "${flags_ro[@]}" 00:17:56.976 09:46:20 -- dd/posix.sh@86 -- # gen_bytes 512 00:17:56.976 09:46:20 -- dd/common.sh@98 -- # xtrace_disable 00:17:56.976 09:46:20 -- common/autotest_common.sh@10 -- # set +x 00:17:56.976 09:46:20 -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:17:56.976 09:46:20 -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=direct 00:17:57.234 [2024-04-18 09:46:20.443521] Starting SPDK v24.05-pre git sha1 65b4e17c6 / DPDK 23.11.0 initialization... 
00:17:57.234 [2024-04-18 09:46:20.443700] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64962 ] 00:17:57.234 [2024-04-18 09:46:20.620907] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:57.799 [2024-04-18 09:46:20.892760] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:17:58.993  Copying: 512/512 [B] (average 500 kBps) 00:17:58.993 00:17:58.993 09:46:22 -- dd/posix.sh@93 -- # [[ 5uz2t841zl6403gz77mr3j17o7bfurgx7z6krrrqa129kp38x7m6ru8ilycyplvejhto9oiczq336jps9ho2ow4xfka7iqdcdp95g0jsxkpoh7f7ur3swjv4jeqiq65lt3ih41nq8flzcyej69m3q492zz3h80o4ocmk7qm6un1j2hnj0fu6ypp2o6iu1xbsn8ndwc320rdw6gwcuhm0ctcc4y45vgnudxt1ztcydxcqkifuty31avxm1suckjqs0w8c0o8gl8l8kmhzfmyzrudq7ujlxh9qu7vzckcysi8ij2il2xa12ya3huuek7rvbaqxjdzehzu5hb7ftes12o5aa1hje5m7q6v0sj2eqyb87pwnmhhydssbft5wvoh4rss55xu0600nxnm4j1y46nfzdk4og4ip5u90m30dg20otnjpgvp92mbqcqla4rky61oqep9ye35dyybjatogmyqregfna4uy6pv9a81hev4d0vxmhdepjczqburht5lj == \5\u\z\2\t\8\4\1\z\l\6\4\0\3\g\z\7\7\m\r\3\j\1\7\o\7\b\f\u\r\g\x\7\z\6\k\r\r\r\q\a\1\2\9\k\p\3\8\x\7\m\6\r\u\8\i\l\y\c\y\p\l\v\e\j\h\t\o\9\o\i\c\z\q\3\3\6\j\p\s\9\h\o\2\o\w\4\x\f\k\a\7\i\q\d\c\d\p\9\5\g\0\j\s\x\k\p\o\h\7\f\7\u\r\3\s\w\j\v\4\j\e\q\i\q\6\5\l\t\3\i\h\4\1\n\q\8\f\l\z\c\y\e\j\6\9\m\3\q\4\9\2\z\z\3\h\8\0\o\4\o\c\m\k\7\q\m\6\u\n\1\j\2\h\n\j\0\f\u\6\y\p\p\2\o\6\i\u\1\x\b\s\n\8\n\d\w\c\3\2\0\r\d\w\6\g\w\c\u\h\m\0\c\t\c\c\4\y\4\5\v\g\n\u\d\x\t\1\z\t\c\y\d\x\c\q\k\i\f\u\t\y\3\1\a\v\x\m\1\s\u\c\k\j\q\s\0\w\8\c\0\o\8\g\l\8\l\8\k\m\h\z\f\m\y\z\r\u\d\q\7\u\j\l\x\h\9\q\u\7\v\z\c\k\c\y\s\i\8\i\j\2\i\l\2\x\a\1\2\y\a\3\h\u\u\e\k\7\r\v\b\a\q\x\j\d\z\e\h\z\u\5\h\b\7\f\t\e\s\1\2\o\5\a\a\1\h\j\e\5\m\7\q\6\v\0\s\j\2\e\q\y\b\8\7\p\w\n\m\h\h\y\d\s\s\b\f\t\5\w\v\o\h\4\r\s\s\5\5\x\u\0\6\0\0\n\x\n\m\4\j\1\y\4\6\n\f\z\d\k\4\o\g\4\i\p\5\u\9\0\m\3\0\d\g\2\0\o\t\n\j\p\g\v\p\9\2\m\b\q\c\q\l\a\4\r\k\y\6\1\o\q\e\p\9\y\e\3\5\d\y\y\b\j\a\t\o\g\m\y\q\r\e\g\f\n\a\4\u\y\6\p\v\9\a\8\1\h\e\v\4\d\0\v\x\m\h\d\e\p\j\c\z\q\b\u\r\h\t\5\l\j ]] 00:17:58.993 09:46:22 -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:17:58.993 09:46:22 -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=nonblock 00:17:59.251 [2024-04-18 09:46:22.454909] Starting SPDK v24.05-pre git sha1 65b4e17c6 / DPDK 23.11.0 initialization... 
00:17:59.251 [2024-04-18 09:46:22.455114] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64989 ] 00:17:59.251 [2024-04-18 09:46:22.629441] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:59.509 [2024-04-18 09:46:22.870383] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:18:01.010  Copying: 512/512 [B] (average 500 kBps) 00:18:01.010 00:18:01.010 09:46:24 -- dd/posix.sh@93 -- # [[ 5uz2t841zl6403gz77mr3j17o7bfurgx7z6krrrqa129kp38x7m6ru8ilycyplvejhto9oiczq336jps9ho2ow4xfka7iqdcdp95g0jsxkpoh7f7ur3swjv4jeqiq65lt3ih41nq8flzcyej69m3q492zz3h80o4ocmk7qm6un1j2hnj0fu6ypp2o6iu1xbsn8ndwc320rdw6gwcuhm0ctcc4y45vgnudxt1ztcydxcqkifuty31avxm1suckjqs0w8c0o8gl8l8kmhzfmyzrudq7ujlxh9qu7vzckcysi8ij2il2xa12ya3huuek7rvbaqxjdzehzu5hb7ftes12o5aa1hje5m7q6v0sj2eqyb87pwnmhhydssbft5wvoh4rss55xu0600nxnm4j1y46nfzdk4og4ip5u90m30dg20otnjpgvp92mbqcqla4rky61oqep9ye35dyybjatogmyqregfna4uy6pv9a81hev4d0vxmhdepjczqburht5lj == \5\u\z\2\t\8\4\1\z\l\6\4\0\3\g\z\7\7\m\r\3\j\1\7\o\7\b\f\u\r\g\x\7\z\6\k\r\r\r\q\a\1\2\9\k\p\3\8\x\7\m\6\r\u\8\i\l\y\c\y\p\l\v\e\j\h\t\o\9\o\i\c\z\q\3\3\6\j\p\s\9\h\o\2\o\w\4\x\f\k\a\7\i\q\d\c\d\p\9\5\g\0\j\s\x\k\p\o\h\7\f\7\u\r\3\s\w\j\v\4\j\e\q\i\q\6\5\l\t\3\i\h\4\1\n\q\8\f\l\z\c\y\e\j\6\9\m\3\q\4\9\2\z\z\3\h\8\0\o\4\o\c\m\k\7\q\m\6\u\n\1\j\2\h\n\j\0\f\u\6\y\p\p\2\o\6\i\u\1\x\b\s\n\8\n\d\w\c\3\2\0\r\d\w\6\g\w\c\u\h\m\0\c\t\c\c\4\y\4\5\v\g\n\u\d\x\t\1\z\t\c\y\d\x\c\q\k\i\f\u\t\y\3\1\a\v\x\m\1\s\u\c\k\j\q\s\0\w\8\c\0\o\8\g\l\8\l\8\k\m\h\z\f\m\y\z\r\u\d\q\7\u\j\l\x\h\9\q\u\7\v\z\c\k\c\y\s\i\8\i\j\2\i\l\2\x\a\1\2\y\a\3\h\u\u\e\k\7\r\v\b\a\q\x\j\d\z\e\h\z\u\5\h\b\7\f\t\e\s\1\2\o\5\a\a\1\h\j\e\5\m\7\q\6\v\0\s\j\2\e\q\y\b\8\7\p\w\n\m\h\h\y\d\s\s\b\f\t\5\w\v\o\h\4\r\s\s\5\5\x\u\0\6\0\0\n\x\n\m\4\j\1\y\4\6\n\f\z\d\k\4\o\g\4\i\p\5\u\9\0\m\3\0\d\g\2\0\o\t\n\j\p\g\v\p\9\2\m\b\q\c\q\l\a\4\r\k\y\6\1\o\q\e\p\9\y\e\3\5\d\y\y\b\j\a\t\o\g\m\y\q\r\e\g\f\n\a\4\u\y\6\p\v\9\a\8\1\h\e\v\4\d\0\v\x\m\h\d\e\p\j\c\z\q\b\u\r\h\t\5\l\j ]] 00:18:01.010 09:46:24 -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:18:01.010 09:46:24 -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=sync 00:18:01.268 [2024-04-18 09:46:24.433018] Starting SPDK v24.05-pre git sha1 65b4e17c6 / DPDK 23.11.0 initialization... 
00:18:01.268 [2024-04-18 09:46:24.433205] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65016 ] 00:18:01.268 [2024-04-18 09:46:24.598178] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:01.526 [2024-04-18 09:46:24.834567] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:18:03.161  Copying: 512/512 [B] (average 166 kBps) 00:18:03.161 00:18:03.161 09:46:26 -- dd/posix.sh@93 -- # [[ 5uz2t841zl6403gz77mr3j17o7bfurgx7z6krrrqa129kp38x7m6ru8ilycyplvejhto9oiczq336jps9ho2ow4xfka7iqdcdp95g0jsxkpoh7f7ur3swjv4jeqiq65lt3ih41nq8flzcyej69m3q492zz3h80o4ocmk7qm6un1j2hnj0fu6ypp2o6iu1xbsn8ndwc320rdw6gwcuhm0ctcc4y45vgnudxt1ztcydxcqkifuty31avxm1suckjqs0w8c0o8gl8l8kmhzfmyzrudq7ujlxh9qu7vzckcysi8ij2il2xa12ya3huuek7rvbaqxjdzehzu5hb7ftes12o5aa1hje5m7q6v0sj2eqyb87pwnmhhydssbft5wvoh4rss55xu0600nxnm4j1y46nfzdk4og4ip5u90m30dg20otnjpgvp92mbqcqla4rky61oqep9ye35dyybjatogmyqregfna4uy6pv9a81hev4d0vxmhdepjczqburht5lj == \5\u\z\2\t\8\4\1\z\l\6\4\0\3\g\z\7\7\m\r\3\j\1\7\o\7\b\f\u\r\g\x\7\z\6\k\r\r\r\q\a\1\2\9\k\p\3\8\x\7\m\6\r\u\8\i\l\y\c\y\p\l\v\e\j\h\t\o\9\o\i\c\z\q\3\3\6\j\p\s\9\h\o\2\o\w\4\x\f\k\a\7\i\q\d\c\d\p\9\5\g\0\j\s\x\k\p\o\h\7\f\7\u\r\3\s\w\j\v\4\j\e\q\i\q\6\5\l\t\3\i\h\4\1\n\q\8\f\l\z\c\y\e\j\6\9\m\3\q\4\9\2\z\z\3\h\8\0\o\4\o\c\m\k\7\q\m\6\u\n\1\j\2\h\n\j\0\f\u\6\y\p\p\2\o\6\i\u\1\x\b\s\n\8\n\d\w\c\3\2\0\r\d\w\6\g\w\c\u\h\m\0\c\t\c\c\4\y\4\5\v\g\n\u\d\x\t\1\z\t\c\y\d\x\c\q\k\i\f\u\t\y\3\1\a\v\x\m\1\s\u\c\k\j\q\s\0\w\8\c\0\o\8\g\l\8\l\8\k\m\h\z\f\m\y\z\r\u\d\q\7\u\j\l\x\h\9\q\u\7\v\z\c\k\c\y\s\i\8\i\j\2\i\l\2\x\a\1\2\y\a\3\h\u\u\e\k\7\r\v\b\a\q\x\j\d\z\e\h\z\u\5\h\b\7\f\t\e\s\1\2\o\5\a\a\1\h\j\e\5\m\7\q\6\v\0\s\j\2\e\q\y\b\8\7\p\w\n\m\h\h\y\d\s\s\b\f\t\5\w\v\o\h\4\r\s\s\5\5\x\u\0\6\0\0\n\x\n\m\4\j\1\y\4\6\n\f\z\d\k\4\o\g\4\i\p\5\u\9\0\m\3\0\d\g\2\0\o\t\n\j\p\g\v\p\9\2\m\b\q\c\q\l\a\4\r\k\y\6\1\o\q\e\p\9\y\e\3\5\d\y\y\b\j\a\t\o\g\m\y\q\r\e\g\f\n\a\4\u\y\6\p\v\9\a\8\1\h\e\v\4\d\0\v\x\m\h\d\e\p\j\c\z\q\b\u\r\h\t\5\l\j ]] 00:18:03.161 09:46:26 -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:18:03.161 09:46:26 -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=dsync 00:18:03.161 [2024-04-18 09:46:26.382825] Starting SPDK v24.05-pre git sha1 65b4e17c6 / DPDK 23.11.0 initialization... 
00:18:03.161 [2024-04-18 09:46:26.382996] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65043 ] 00:18:03.420 [2024-04-18 09:46:26.558594] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:03.420 [2024-04-18 09:46:26.800988] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:18:04.922  Copying: 512/512 [B] (average 166 kBps) 00:18:04.922 00:18:04.922 09:46:28 -- dd/posix.sh@93 -- # [[ 5uz2t841zl6403gz77mr3j17o7bfurgx7z6krrrqa129kp38x7m6ru8ilycyplvejhto9oiczq336jps9ho2ow4xfka7iqdcdp95g0jsxkpoh7f7ur3swjv4jeqiq65lt3ih41nq8flzcyej69m3q492zz3h80o4ocmk7qm6un1j2hnj0fu6ypp2o6iu1xbsn8ndwc320rdw6gwcuhm0ctcc4y45vgnudxt1ztcydxcqkifuty31avxm1suckjqs0w8c0o8gl8l8kmhzfmyzrudq7ujlxh9qu7vzckcysi8ij2il2xa12ya3huuek7rvbaqxjdzehzu5hb7ftes12o5aa1hje5m7q6v0sj2eqyb87pwnmhhydssbft5wvoh4rss55xu0600nxnm4j1y46nfzdk4og4ip5u90m30dg20otnjpgvp92mbqcqla4rky61oqep9ye35dyybjatogmyqregfna4uy6pv9a81hev4d0vxmhdepjczqburht5lj == \5\u\z\2\t\8\4\1\z\l\6\4\0\3\g\z\7\7\m\r\3\j\1\7\o\7\b\f\u\r\g\x\7\z\6\k\r\r\r\q\a\1\2\9\k\p\3\8\x\7\m\6\r\u\8\i\l\y\c\y\p\l\v\e\j\h\t\o\9\o\i\c\z\q\3\3\6\j\p\s\9\h\o\2\o\w\4\x\f\k\a\7\i\q\d\c\d\p\9\5\g\0\j\s\x\k\p\o\h\7\f\7\u\r\3\s\w\j\v\4\j\e\q\i\q\6\5\l\t\3\i\h\4\1\n\q\8\f\l\z\c\y\e\j\6\9\m\3\q\4\9\2\z\z\3\h\8\0\o\4\o\c\m\k\7\q\m\6\u\n\1\j\2\h\n\j\0\f\u\6\y\p\p\2\o\6\i\u\1\x\b\s\n\8\n\d\w\c\3\2\0\r\d\w\6\g\w\c\u\h\m\0\c\t\c\c\4\y\4\5\v\g\n\u\d\x\t\1\z\t\c\y\d\x\c\q\k\i\f\u\t\y\3\1\a\v\x\m\1\s\u\c\k\j\q\s\0\w\8\c\0\o\8\g\l\8\l\8\k\m\h\z\f\m\y\z\r\u\d\q\7\u\j\l\x\h\9\q\u\7\v\z\c\k\c\y\s\i\8\i\j\2\i\l\2\x\a\1\2\y\a\3\h\u\u\e\k\7\r\v\b\a\q\x\j\d\z\e\h\z\u\5\h\b\7\f\t\e\s\1\2\o\5\a\a\1\h\j\e\5\m\7\q\6\v\0\s\j\2\e\q\y\b\8\7\p\w\n\m\h\h\y\d\s\s\b\f\t\5\w\v\o\h\4\r\s\s\5\5\x\u\0\6\0\0\n\x\n\m\4\j\1\y\4\6\n\f\z\d\k\4\o\g\4\i\p\5\u\9\0\m\3\0\d\g\2\0\o\t\n\j\p\g\v\p\9\2\m\b\q\c\q\l\a\4\r\k\y\6\1\o\q\e\p\9\y\e\3\5\d\y\y\b\j\a\t\o\g\m\y\q\r\e\g\f\n\a\4\u\y\6\p\v\9\a\8\1\h\e\v\4\d\0\v\x\m\h\d\e\p\j\c\z\q\b\u\r\h\t\5\l\j ]] 00:18:04.922 00:18:04.922 real 0m16.027s 00:18:04.922 user 0m12.991s 00:18:04.922 sys 0m8.081s 00:18:04.922 ************************************ 00:18:04.922 END TEST dd_flags_misc 00:18:04.922 ************************************ 00:18:04.922 09:46:28 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:18:04.922 09:46:28 -- common/autotest_common.sh@10 -- # set +x 00:18:05.180 09:46:28 -- dd/posix.sh@131 -- # tests_forced_aio 00:18:05.180 09:46:28 -- dd/posix.sh@110 -- # printf '* Second test run%s\n' ', disabling liburing, forcing AIO' 00:18:05.180 * Second test run, disabling liburing, forcing AIO 00:18:05.180 09:46:28 -- dd/posix.sh@113 -- # DD_APP+=("--aio") 00:18:05.180 09:46:28 -- dd/posix.sh@114 -- # run_test dd_flag_append_forced_aio append 00:18:05.180 09:46:28 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:18:05.180 09:46:28 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:18:05.180 09:46:28 -- common/autotest_common.sh@10 -- # set +x 00:18:05.180 ************************************ 00:18:05.180 START TEST dd_flag_append_forced_aio 00:18:05.180 ************************************ 00:18:05.180 09:46:28 -- common/autotest_common.sh@1111 -- # append 00:18:05.180 09:46:28 -- dd/posix.sh@16 -- # local dump0 00:18:05.180 09:46:28 -- dd/posix.sh@17 -- # local dump1 00:18:05.180 09:46:28 -- dd/posix.sh@19 -- # gen_bytes 32 00:18:05.180 09:46:28 -- 
dd/common.sh@98 -- # xtrace_disable 00:18:05.180 09:46:28 -- common/autotest_common.sh@10 -- # set +x 00:18:05.180 09:46:28 -- dd/posix.sh@19 -- # dump0=84qz2tfxu7zsobohykbhvk406abeikrb 00:18:05.180 09:46:28 -- dd/posix.sh@20 -- # gen_bytes 32 00:18:05.180 09:46:28 -- dd/common.sh@98 -- # xtrace_disable 00:18:05.180 09:46:28 -- common/autotest_common.sh@10 -- # set +x 00:18:05.180 09:46:28 -- dd/posix.sh@20 -- # dump1=37bp33p4gge3muhd0vg8y9yciraw6w2s 00:18:05.180 09:46:28 -- dd/posix.sh@22 -- # printf %s 84qz2tfxu7zsobohykbhvk406abeikrb 00:18:05.180 09:46:28 -- dd/posix.sh@23 -- # printf %s 37bp33p4gge3muhd0vg8y9yciraw6w2s 00:18:05.181 09:46:28 -- dd/posix.sh@25 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=append 00:18:05.181 [2024-04-18 09:46:28.505065] Starting SPDK v24.05-pre git sha1 65b4e17c6 / DPDK 23.11.0 initialization... 00:18:05.181 [2024-04-18 09:46:28.505284] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65093 ] 00:18:05.439 [2024-04-18 09:46:28.680351] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:05.697 [2024-04-18 09:46:28.915361] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:18:07.334  Copying: 32/32 [B] (average 31 kBps) 00:18:07.334 00:18:07.334 09:46:30 -- dd/posix.sh@27 -- # [[ 37bp33p4gge3muhd0vg8y9yciraw6w2s84qz2tfxu7zsobohykbhvk406abeikrb == \3\7\b\p\3\3\p\4\g\g\e\3\m\u\h\d\0\v\g\8\y\9\y\c\i\r\a\w\6\w\2\s\8\4\q\z\2\t\f\x\u\7\z\s\o\b\o\h\y\k\b\h\v\k\4\0\6\a\b\e\i\k\r\b ]] 00:18:07.334 00:18:07.334 real 0m2.100s 00:18:07.334 user 0m1.697s 00:18:07.334 sys 0m0.277s 00:18:07.334 09:46:30 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:18:07.334 09:46:30 -- common/autotest_common.sh@10 -- # set +x 00:18:07.334 ************************************ 00:18:07.334 END TEST dd_flag_append_forced_aio 00:18:07.334 ************************************ 00:18:07.334 09:46:30 -- dd/posix.sh@115 -- # run_test dd_flag_directory_forced_aio directory 00:18:07.334 09:46:30 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:18:07.334 09:46:30 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:18:07.334 09:46:30 -- common/autotest_common.sh@10 -- # set +x 00:18:07.334 ************************************ 00:18:07.334 START TEST dd_flag_directory_forced_aio 00:18:07.334 ************************************ 00:18:07.334 09:46:30 -- common/autotest_common.sh@1111 -- # directory 00:18:07.334 09:46:30 -- dd/posix.sh@31 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=directory --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:18:07.334 09:46:30 -- common/autotest_common.sh@638 -- # local es=0 00:18:07.334 09:46:30 -- common/autotest_common.sh@640 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=directory --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:18:07.334 09:46:30 -- common/autotest_common.sh@626 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:18:07.334 09:46:30 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:18:07.334 09:46:30 -- common/autotest_common.sh@630 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 
00:18:07.334 09:46:30 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:18:07.334 09:46:30 -- common/autotest_common.sh@632 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:18:07.334 09:46:30 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:18:07.334 09:46:30 -- common/autotest_common.sh@632 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:18:07.334 09:46:30 -- common/autotest_common.sh@632 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:18:07.334 09:46:30 -- common/autotest_common.sh@641 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=directory --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:18:07.593 [2024-04-18 09:46:30.731281] Starting SPDK v24.05-pre git sha1 65b4e17c6 / DPDK 23.11.0 initialization... 00:18:07.593 [2024-04-18 09:46:30.731462] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65141 ] 00:18:07.593 [2024-04-18 09:46:30.909418] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:07.853 [2024-04-18 09:46:31.154374] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:18:08.111 [2024-04-18 09:46:31.465114] spdk_dd.c: 894:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:18:08.112 [2024-04-18 09:46:31.465236] spdk_dd.c:1083:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:18:08.112 [2024-04-18 09:46:31.465266] app.c: 966:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:18:09.048 [2024-04-18 09:46:32.203395] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:18:09.307 09:46:32 -- common/autotest_common.sh@641 -- # es=236 00:18:09.307 09:46:32 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:18:09.307 09:46:32 -- common/autotest_common.sh@650 -- # es=108 00:18:09.307 09:46:32 -- common/autotest_common.sh@651 -- # case "$es" in 00:18:09.307 09:46:32 -- common/autotest_common.sh@658 -- # es=1 00:18:09.307 09:46:32 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:18:09.307 09:46:32 -- dd/posix.sh@32 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=directory 00:18:09.307 09:46:32 -- common/autotest_common.sh@638 -- # local es=0 00:18:09.307 09:46:32 -- common/autotest_common.sh@640 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=directory 00:18:09.308 09:46:32 -- common/autotest_common.sh@626 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:18:09.308 09:46:32 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:18:09.308 09:46:32 -- common/autotest_common.sh@630 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:18:09.308 09:46:32 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:18:09.308 09:46:32 -- common/autotest_common.sh@632 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:18:09.308 09:46:32 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:18:09.308 09:46:32 -- common/autotest_common.sh@632 -- # 
arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:18:09.308 09:46:32 -- common/autotest_common.sh@632 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:18:09.308 09:46:32 -- common/autotest_common.sh@641 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=directory 00:18:09.566 [2024-04-18 09:46:32.770196] Starting SPDK v24.05-pre git sha1 65b4e17c6 / DPDK 23.11.0 initialization... 00:18:09.566 [2024-04-18 09:46:32.770371] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65163 ] 00:18:09.566 [2024-04-18 09:46:32.933960] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:09.825 [2024-04-18 09:46:33.187238] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:18:10.392 [2024-04-18 09:46:33.503173] spdk_dd.c: 894:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:18:10.392 [2024-04-18 09:46:33.503248] spdk_dd.c:1132:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:18:10.392 [2024-04-18 09:46:33.503291] app.c: 966:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:18:10.961 [2024-04-18 09:46:34.277755] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:18:11.529 ************************************ 00:18:11.529 END TEST dd_flag_directory_forced_aio 00:18:11.529 ************************************ 00:18:11.529 09:46:34 -- common/autotest_common.sh@641 -- # es=236 00:18:11.529 09:46:34 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:18:11.529 09:46:34 -- common/autotest_common.sh@650 -- # es=108 00:18:11.529 09:46:34 -- common/autotest_common.sh@651 -- # case "$es" in 00:18:11.529 09:46:34 -- common/autotest_common.sh@658 -- # es=1 00:18:11.529 09:46:34 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:18:11.529 00:18:11.529 real 0m4.065s 00:18:11.529 user 0m3.285s 00:18:11.529 sys 0m0.552s 00:18:11.529 09:46:34 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:18:11.529 09:46:34 -- common/autotest_common.sh@10 -- # set +x 00:18:11.529 09:46:34 -- dd/posix.sh@116 -- # run_test dd_flag_nofollow_forced_aio nofollow 00:18:11.529 09:46:34 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:18:11.529 09:46:34 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:18:11.529 09:46:34 -- common/autotest_common.sh@10 -- # set +x 00:18:11.529 ************************************ 00:18:11.529 START TEST dd_flag_nofollow_forced_aio 00:18:11.529 ************************************ 00:18:11.529 09:46:34 -- common/autotest_common.sh@1111 -- # nofollow 00:18:11.529 09:46:34 -- dd/posix.sh@36 -- # local test_file0_link=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link 00:18:11.529 09:46:34 -- dd/posix.sh@37 -- # local test_file1_link=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link 00:18:11.529 09:46:34 -- dd/posix.sh@39 -- # ln -fs /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link 00:18:11.530 09:46:34 -- dd/posix.sh@40 -- # ln -fs /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link 00:18:11.530 09:46:34 -- dd/posix.sh@42 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio 
--if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --iflag=nofollow --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:18:11.530 09:46:34 -- common/autotest_common.sh@638 -- # local es=0 00:18:11.530 09:46:34 -- common/autotest_common.sh@640 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --iflag=nofollow --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:18:11.530 09:46:34 -- common/autotest_common.sh@626 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:18:11.530 09:46:34 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:18:11.530 09:46:34 -- common/autotest_common.sh@630 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:18:11.530 09:46:34 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:18:11.530 09:46:34 -- common/autotest_common.sh@632 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:18:11.530 09:46:34 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:18:11.530 09:46:34 -- common/autotest_common.sh@632 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:18:11.530 09:46:34 -- common/autotest_common.sh@632 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:18:11.530 09:46:34 -- common/autotest_common.sh@641 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --iflag=nofollow --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:18:11.788 [2024-04-18 09:46:34.928812] Starting SPDK v24.05-pre git sha1 65b4e17c6 / DPDK 23.11.0 initialization... 00:18:11.788 [2024-04-18 09:46:34.928991] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65213 ] 00:18:11.788 [2024-04-18 09:46:35.103041] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:12.047 [2024-04-18 09:46:35.347685] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:18:12.307 [2024-04-18 09:46:35.658410] spdk_dd.c: 894:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link: Too many levels of symbolic links 00:18:12.307 [2024-04-18 09:46:35.658480] spdk_dd.c:1083:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link: Too many levels of symbolic links 00:18:12.307 [2024-04-18 09:46:35.658508] app.c: 966:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:18:13.244 [2024-04-18 09:46:36.405485] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:18:13.503 09:46:36 -- common/autotest_common.sh@641 -- # es=216 00:18:13.503 09:46:36 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:18:13.503 09:46:36 -- common/autotest_common.sh@650 -- # es=88 00:18:13.503 09:46:36 -- common/autotest_common.sh@651 -- # case "$es" in 00:18:13.503 09:46:36 -- common/autotest_common.sh@658 -- # es=1 00:18:13.503 09:46:36 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:18:13.503 09:46:36 -- dd/posix.sh@43 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link --oflag=nofollow 00:18:13.503 09:46:36 -- common/autotest_common.sh@638 -- # local es=0 00:18:13.503 09:46:36 -- common/autotest_common.sh@640 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio 
--if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link --oflag=nofollow 00:18:13.503 09:46:36 -- common/autotest_common.sh@626 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:18:13.503 09:46:36 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:18:13.503 09:46:36 -- common/autotest_common.sh@630 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:18:13.503 09:46:36 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:18:13.503 09:46:36 -- common/autotest_common.sh@632 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:18:13.503 09:46:36 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:18:13.503 09:46:36 -- common/autotest_common.sh@632 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:18:13.503 09:46:36 -- common/autotest_common.sh@632 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:18:13.503 09:46:36 -- common/autotest_common.sh@641 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link --oflag=nofollow 00:18:13.762 [2024-04-18 09:46:36.920327] Starting SPDK v24.05-pre git sha1 65b4e17c6 / DPDK 23.11.0 initialization... 00:18:13.762 [2024-04-18 09:46:36.920777] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65240 ] 00:18:13.762 [2024-04-18 09:46:37.094250] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:14.020 [2024-04-18 09:46:37.347871] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:18:14.279 [2024-04-18 09:46:37.659856] spdk_dd.c: 894:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link: Too many levels of symbolic links 00:18:14.279 [2024-04-18 09:46:37.659933] spdk_dd.c:1132:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link: Too many levels of symbolic links 00:18:14.279 [2024-04-18 09:46:37.659985] app.c: 966:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:18:15.216 [2024-04-18 09:46:38.411708] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:18:15.475 09:46:38 -- common/autotest_common.sh@641 -- # es=216 00:18:15.475 09:46:38 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:18:15.475 09:46:38 -- common/autotest_common.sh@650 -- # es=88 00:18:15.475 09:46:38 -- common/autotest_common.sh@651 -- # case "$es" in 00:18:15.475 09:46:38 -- common/autotest_common.sh@658 -- # es=1 00:18:15.475 09:46:38 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:18:15.475 09:46:38 -- dd/posix.sh@46 -- # gen_bytes 512 00:18:15.475 09:46:38 -- dd/common.sh@98 -- # xtrace_disable 00:18:15.475 09:46:38 -- common/autotest_common.sh@10 -- # set +x 00:18:15.475 09:46:38 -- dd/posix.sh@48 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:18:15.736 [2024-04-18 09:46:38.949945] Starting SPDK v24.05-pre git sha1 65b4e17c6 / DPDK 23.11.0 initialization... 
00:18:15.736 [2024-04-18 09:46:38.950437] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65265 ] 00:18:15.736 [2024-04-18 09:46:39.124737] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:15.998 [2024-04-18 09:46:39.362400] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:18:17.498  Copying: 512/512 [B] (average 500 kBps) 00:18:17.498 00:18:17.499 09:46:40 -- dd/posix.sh@49 -- # [[ cregdl8xdcatu4bnfsi0kxnd1nx2epxju76pidohdcsv27nplos957b56cpwm97hfopyv57cll99evjtww9x1d411cl58kx8vni24odtduqxy336m8nppbxvbvb0cuigp4untxrw4h8u3v6siwe06cfi5qg2wx20tvpvwgza2ywmunhxyxjbwf0h1xit1jqdo5tm0g5kdgfkq2v3w4a1d50zfa20nxa97u3gtj7k3mq8ohna3b49mr4t8vo449jpf488u0ou6bhgwcsrwr4je144cu8yzjpmrxi2hzpo0q9d4zb89004r1gjppx8rjcsecj0db4bolieahvxh2ydtbz1wkl2ukvur22dwel7zrmpn46qigjs44hwaw0aoq573y7w601eq4fiwx7gzz034pdg6b8vkmtdagn0q11q8y1ti8f036dq4gb4vuzag0ku91owxuct566pq1r9w5cink8phvygrgciwdeoedj2l57fagmyx07wvex6oq2cxov9 == \c\r\e\g\d\l\8\x\d\c\a\t\u\4\b\n\f\s\i\0\k\x\n\d\1\n\x\2\e\p\x\j\u\7\6\p\i\d\o\h\d\c\s\v\2\7\n\p\l\o\s\9\5\7\b\5\6\c\p\w\m\9\7\h\f\o\p\y\v\5\7\c\l\l\9\9\e\v\j\t\w\w\9\x\1\d\4\1\1\c\l\5\8\k\x\8\v\n\i\2\4\o\d\t\d\u\q\x\y\3\3\6\m\8\n\p\p\b\x\v\b\v\b\0\c\u\i\g\p\4\u\n\t\x\r\w\4\h\8\u\3\v\6\s\i\w\e\0\6\c\f\i\5\q\g\2\w\x\2\0\t\v\p\v\w\g\z\a\2\y\w\m\u\n\h\x\y\x\j\b\w\f\0\h\1\x\i\t\1\j\q\d\o\5\t\m\0\g\5\k\d\g\f\k\q\2\v\3\w\4\a\1\d\5\0\z\f\a\2\0\n\x\a\9\7\u\3\g\t\j\7\k\3\m\q\8\o\h\n\a\3\b\4\9\m\r\4\t\8\v\o\4\4\9\j\p\f\4\8\8\u\0\o\u\6\b\h\g\w\c\s\r\w\r\4\j\e\1\4\4\c\u\8\y\z\j\p\m\r\x\i\2\h\z\p\o\0\q\9\d\4\z\b\8\9\0\0\4\r\1\g\j\p\p\x\8\r\j\c\s\e\c\j\0\d\b\4\b\o\l\i\e\a\h\v\x\h\2\y\d\t\b\z\1\w\k\l\2\u\k\v\u\r\2\2\d\w\e\l\7\z\r\m\p\n\4\6\q\i\g\j\s\4\4\h\w\a\w\0\a\o\q\5\7\3\y\7\w\6\0\1\e\q\4\f\i\w\x\7\g\z\z\0\3\4\p\d\g\6\b\8\v\k\m\t\d\a\g\n\0\q\1\1\q\8\y\1\t\i\8\f\0\3\6\d\q\4\g\b\4\v\u\z\a\g\0\k\u\9\1\o\w\x\u\c\t\5\6\6\p\q\1\r\9\w\5\c\i\n\k\8\p\h\v\y\g\r\g\c\i\w\d\e\o\e\d\j\2\l\5\7\f\a\g\m\y\x\0\7\w\v\e\x\6\o\q\2\c\x\o\v\9 ]] 00:18:17.499 00:18:17.499 real 0m6.004s 00:18:17.499 user 0m4.856s 00:18:17.499 sys 0m0.791s 00:18:17.499 09:46:40 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:18:17.499 ************************************ 00:18:17.499 END TEST dd_flag_nofollow_forced_aio 00:18:17.499 ************************************ 00:18:17.499 09:46:40 -- common/autotest_common.sh@10 -- # set +x 00:18:17.499 09:46:40 -- dd/posix.sh@117 -- # run_test dd_flag_noatime_forced_aio noatime 00:18:17.499 09:46:40 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:18:17.499 09:46:40 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:18:17.499 09:46:40 -- common/autotest_common.sh@10 -- # set +x 00:18:17.756 ************************************ 00:18:17.756 START TEST dd_flag_noatime_forced_aio 00:18:17.756 ************************************ 00:18:17.756 09:46:40 -- common/autotest_common.sh@1111 -- # noatime 00:18:17.756 09:46:40 -- dd/posix.sh@53 -- # local atime_if 00:18:17.756 09:46:40 -- dd/posix.sh@54 -- # local atime_of 00:18:17.756 09:46:40 -- dd/posix.sh@58 -- # gen_bytes 512 00:18:17.756 09:46:40 -- dd/common.sh@98 -- # xtrace_disable 00:18:17.756 09:46:40 -- common/autotest_common.sh@10 -- # set +x 00:18:17.756 09:46:40 -- dd/posix.sh@60 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:18:17.756 09:46:40 -- dd/posix.sh@60 -- # atime_if=1713433599 
00:18:17.756 09:46:40 -- dd/posix.sh@61 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:18:17.756 09:46:40 -- dd/posix.sh@61 -- # atime_of=1713433600 00:18:17.756 09:46:40 -- dd/posix.sh@66 -- # sleep 1 00:18:18.692 09:46:41 -- dd/posix.sh@68 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=noatime --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:18:18.692 [2024-04-18 09:46:42.060050] Starting SPDK v24.05-pre git sha1 65b4e17c6 / DPDK 23.11.0 initialization... 00:18:18.692 [2024-04-18 09:46:42.060295] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65327 ] 00:18:18.952 [2024-04-18 09:46:42.237331] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:19.211 [2024-04-18 09:46:42.508043] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:18:20.854  Copying: 512/512 [B] (average 500 kBps) 00:18:20.854 00:18:20.854 09:46:44 -- dd/posix.sh@69 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:18:20.854 09:46:44 -- dd/posix.sh@69 -- # (( atime_if == 1713433599 )) 00:18:20.854 09:46:44 -- dd/posix.sh@70 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:18:20.854 09:46:44 -- dd/posix.sh@70 -- # (( atime_of == 1713433600 )) 00:18:20.854 09:46:44 -- dd/posix.sh@72 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:18:20.854 [2024-04-18 09:46:44.121205] Starting SPDK v24.05-pre git sha1 65b4e17c6 / DPDK 23.11.0 initialization... 
00:18:20.854 [2024-04-18 09:46:44.121380] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65356 ] 00:18:21.112 [2024-04-18 09:46:44.294169] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:21.370 [2024-04-18 09:46:44.522673] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:18:23.005  Copying: 512/512 [B] (average 500 kBps) 00:18:23.005 00:18:23.005 09:46:46 -- dd/posix.sh@73 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:18:23.005 09:46:46 -- dd/posix.sh@73 -- # (( atime_if < 1713433604 )) 00:18:23.005 00:18:23.005 real 0m5.094s 00:18:23.005 user 0m3.304s 00:18:23.005 sys 0m0.542s 00:18:23.005 09:46:46 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:18:23.005 09:46:46 -- common/autotest_common.sh@10 -- # set +x 00:18:23.005 ************************************ 00:18:23.005 END TEST dd_flag_noatime_forced_aio 00:18:23.005 ************************************ 00:18:23.005 09:46:46 -- dd/posix.sh@118 -- # run_test dd_flags_misc_forced_aio io 00:18:23.005 09:46:46 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:18:23.005 09:46:46 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:18:23.005 09:46:46 -- common/autotest_common.sh@10 -- # set +x 00:18:23.005 ************************************ 00:18:23.005 START TEST dd_flags_misc_forced_aio 00:18:23.005 ************************************ 00:18:23.005 09:46:46 -- common/autotest_common.sh@1111 -- # io 00:18:23.005 09:46:46 -- dd/posix.sh@77 -- # local flags_ro flags_rw flag_ro flag_rw 00:18:23.005 09:46:46 -- dd/posix.sh@81 -- # flags_ro=(direct nonblock) 00:18:23.005 09:46:46 -- dd/posix.sh@82 -- # flags_rw=("${flags_ro[@]}" sync dsync) 00:18:23.005 09:46:46 -- dd/posix.sh@85 -- # for flag_ro in "${flags_ro[@]}" 00:18:23.005 09:46:46 -- dd/posix.sh@86 -- # gen_bytes 512 00:18:23.005 09:46:46 -- dd/common.sh@98 -- # xtrace_disable 00:18:23.005 09:46:46 -- common/autotest_common.sh@10 -- # set +x 00:18:23.005 09:46:46 -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:18:23.005 09:46:46 -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=direct 00:18:23.005 [2024-04-18 09:46:46.249699] Starting SPDK v24.05-pre git sha1 65b4e17c6 / DPDK 23.11.0 initialization... 
00:18:23.005 [2024-04-18 09:46:46.249869] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65405 ] 00:18:23.263 [2024-04-18 09:46:46.421210] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:23.522 [2024-04-18 09:46:46.661673] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:18:25.153  Copying: 512/512 [B] (average 500 kBps) 00:18:25.153 00:18:25.153 09:46:48 -- dd/posix.sh@93 -- # [[ 9t58s4d2hk755hqv9uwg31vhetiz0pqo7vhh8zzdxm1fl94bs7ibfsesogbjz9kal49st8t3u9pg916lk9xrlaxj54le6d4owpfkr6o7sok3yvjpqxhguzk662eies32d2ogs7uavd432obd548lqm7tfmcouxatimqprpo735shg3y1nufb8gnivjdlm77u611f9jbng2mf7r7rz9ikui2lu76znclbam4nzdarr9ngao5qli2th4jjj89cak9n2zkhe2mloakyj4uloechngh24rq16z8baaoomxzy6ttxgfu9h9oo0htuv0wlchorq7bykay6axqp6ldaedc8bc1veygxw3sqk4l67bwtfmqxb7txv7iliu32ksefuk8rv1ndcch1uf1d31vhzqhl5c88sl9fzb4tvqwzamo1pgbnuqtws95nvacndnj77lmbq28kd8f8v8fuimva70w0hai0qqd6u1la5babmck5mphnxlmdk73755eregpdsbme == \9\t\5\8\s\4\d\2\h\k\7\5\5\h\q\v\9\u\w\g\3\1\v\h\e\t\i\z\0\p\q\o\7\v\h\h\8\z\z\d\x\m\1\f\l\9\4\b\s\7\i\b\f\s\e\s\o\g\b\j\z\9\k\a\l\4\9\s\t\8\t\3\u\9\p\g\9\1\6\l\k\9\x\r\l\a\x\j\5\4\l\e\6\d\4\o\w\p\f\k\r\6\o\7\s\o\k\3\y\v\j\p\q\x\h\g\u\z\k\6\6\2\e\i\e\s\3\2\d\2\o\g\s\7\u\a\v\d\4\3\2\o\b\d\5\4\8\l\q\m\7\t\f\m\c\o\u\x\a\t\i\m\q\p\r\p\o\7\3\5\s\h\g\3\y\1\n\u\f\b\8\g\n\i\v\j\d\l\m\7\7\u\6\1\1\f\9\j\b\n\g\2\m\f\7\r\7\r\z\9\i\k\u\i\2\l\u\7\6\z\n\c\l\b\a\m\4\n\z\d\a\r\r\9\n\g\a\o\5\q\l\i\2\t\h\4\j\j\j\8\9\c\a\k\9\n\2\z\k\h\e\2\m\l\o\a\k\y\j\4\u\l\o\e\c\h\n\g\h\2\4\r\q\1\6\z\8\b\a\a\o\o\m\x\z\y\6\t\t\x\g\f\u\9\h\9\o\o\0\h\t\u\v\0\w\l\c\h\o\r\q\7\b\y\k\a\y\6\a\x\q\p\6\l\d\a\e\d\c\8\b\c\1\v\e\y\g\x\w\3\s\q\k\4\l\6\7\b\w\t\f\m\q\x\b\7\t\x\v\7\i\l\i\u\3\2\k\s\e\f\u\k\8\r\v\1\n\d\c\c\h\1\u\f\1\d\3\1\v\h\z\q\h\l\5\c\8\8\s\l\9\f\z\b\4\t\v\q\w\z\a\m\o\1\p\g\b\n\u\q\t\w\s\9\5\n\v\a\c\n\d\n\j\7\7\l\m\b\q\2\8\k\d\8\f\8\v\8\f\u\i\m\v\a\7\0\w\0\h\a\i\0\q\q\d\6\u\1\l\a\5\b\a\b\m\c\k\5\m\p\h\n\x\l\m\d\k\7\3\7\5\5\e\r\e\g\p\d\s\b\m\e ]] 00:18:25.153 09:46:48 -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:18:25.153 09:46:48 -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=nonblock 00:18:25.153 [2024-04-18 09:46:48.261733] Starting SPDK v24.05-pre git sha1 65b4e17c6 / DPDK 23.11.0 initialization... 
00:18:25.153 [2024-04-18 09:46:48.261908] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65430 ] 00:18:25.153 [2024-04-18 09:46:48.434084] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:25.412 [2024-04-18 09:46:48.670134] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:18:27.045  Copying: 512/512 [B] (average 500 kBps) 00:18:27.045 00:18:27.045 09:46:50 -- dd/posix.sh@93 -- # [[ 9t58s4d2hk755hqv9uwg31vhetiz0pqo7vhh8zzdxm1fl94bs7ibfsesogbjz9kal49st8t3u9pg916lk9xrlaxj54le6d4owpfkr6o7sok3yvjpqxhguzk662eies32d2ogs7uavd432obd548lqm7tfmcouxatimqprpo735shg3y1nufb8gnivjdlm77u611f9jbng2mf7r7rz9ikui2lu76znclbam4nzdarr9ngao5qli2th4jjj89cak9n2zkhe2mloakyj4uloechngh24rq16z8baaoomxzy6ttxgfu9h9oo0htuv0wlchorq7bykay6axqp6ldaedc8bc1veygxw3sqk4l67bwtfmqxb7txv7iliu32ksefuk8rv1ndcch1uf1d31vhzqhl5c88sl9fzb4tvqwzamo1pgbnuqtws95nvacndnj77lmbq28kd8f8v8fuimva70w0hai0qqd6u1la5babmck5mphnxlmdk73755eregpdsbme == \9\t\5\8\s\4\d\2\h\k\7\5\5\h\q\v\9\u\w\g\3\1\v\h\e\t\i\z\0\p\q\o\7\v\h\h\8\z\z\d\x\m\1\f\l\9\4\b\s\7\i\b\f\s\e\s\o\g\b\j\z\9\k\a\l\4\9\s\t\8\t\3\u\9\p\g\9\1\6\l\k\9\x\r\l\a\x\j\5\4\l\e\6\d\4\o\w\p\f\k\r\6\o\7\s\o\k\3\y\v\j\p\q\x\h\g\u\z\k\6\6\2\e\i\e\s\3\2\d\2\o\g\s\7\u\a\v\d\4\3\2\o\b\d\5\4\8\l\q\m\7\t\f\m\c\o\u\x\a\t\i\m\q\p\r\p\o\7\3\5\s\h\g\3\y\1\n\u\f\b\8\g\n\i\v\j\d\l\m\7\7\u\6\1\1\f\9\j\b\n\g\2\m\f\7\r\7\r\z\9\i\k\u\i\2\l\u\7\6\z\n\c\l\b\a\m\4\n\z\d\a\r\r\9\n\g\a\o\5\q\l\i\2\t\h\4\j\j\j\8\9\c\a\k\9\n\2\z\k\h\e\2\m\l\o\a\k\y\j\4\u\l\o\e\c\h\n\g\h\2\4\r\q\1\6\z\8\b\a\a\o\o\m\x\z\y\6\t\t\x\g\f\u\9\h\9\o\o\0\h\t\u\v\0\w\l\c\h\o\r\q\7\b\y\k\a\y\6\a\x\q\p\6\l\d\a\e\d\c\8\b\c\1\v\e\y\g\x\w\3\s\q\k\4\l\6\7\b\w\t\f\m\q\x\b\7\t\x\v\7\i\l\i\u\3\2\k\s\e\f\u\k\8\r\v\1\n\d\c\c\h\1\u\f\1\d\3\1\v\h\z\q\h\l\5\c\8\8\s\l\9\f\z\b\4\t\v\q\w\z\a\m\o\1\p\g\b\n\u\q\t\w\s\9\5\n\v\a\c\n\d\n\j\7\7\l\m\b\q\2\8\k\d\8\f\8\v\8\f\u\i\m\v\a\7\0\w\0\h\a\i\0\q\q\d\6\u\1\l\a\5\b\a\b\m\c\k\5\m\p\h\n\x\l\m\d\k\7\3\7\5\5\e\r\e\g\p\d\s\b\m\e ]] 00:18:27.045 09:46:50 -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:18:27.045 09:46:50 -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=sync 00:18:27.045 [2024-04-18 09:46:50.278053] Starting SPDK v24.05-pre git sha1 65b4e17c6 / DPDK 23.11.0 initialization... 
00:18:27.045 [2024-04-18 09:46:50.278238] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65450 ] 00:18:27.303 [2024-04-18 09:46:50.449095] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:27.303 [2024-04-18 09:46:50.679534] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:18:28.805  Copying: 512/512 [B] (average 125 kBps) 00:18:28.805 00:18:28.805 09:46:52 -- dd/posix.sh@93 -- # [[ 9t58s4d2hk755hqv9uwg31vhetiz0pqo7vhh8zzdxm1fl94bs7ibfsesogbjz9kal49st8t3u9pg916lk9xrlaxj54le6d4owpfkr6o7sok3yvjpqxhguzk662eies32d2ogs7uavd432obd548lqm7tfmcouxatimqprpo735shg3y1nufb8gnivjdlm77u611f9jbng2mf7r7rz9ikui2lu76znclbam4nzdarr9ngao5qli2th4jjj89cak9n2zkhe2mloakyj4uloechngh24rq16z8baaoomxzy6ttxgfu9h9oo0htuv0wlchorq7bykay6axqp6ldaedc8bc1veygxw3sqk4l67bwtfmqxb7txv7iliu32ksefuk8rv1ndcch1uf1d31vhzqhl5c88sl9fzb4tvqwzamo1pgbnuqtws95nvacndnj77lmbq28kd8f8v8fuimva70w0hai0qqd6u1la5babmck5mphnxlmdk73755eregpdsbme == \9\t\5\8\s\4\d\2\h\k\7\5\5\h\q\v\9\u\w\g\3\1\v\h\e\t\i\z\0\p\q\o\7\v\h\h\8\z\z\d\x\m\1\f\l\9\4\b\s\7\i\b\f\s\e\s\o\g\b\j\z\9\k\a\l\4\9\s\t\8\t\3\u\9\p\g\9\1\6\l\k\9\x\r\l\a\x\j\5\4\l\e\6\d\4\o\w\p\f\k\r\6\o\7\s\o\k\3\y\v\j\p\q\x\h\g\u\z\k\6\6\2\e\i\e\s\3\2\d\2\o\g\s\7\u\a\v\d\4\3\2\o\b\d\5\4\8\l\q\m\7\t\f\m\c\o\u\x\a\t\i\m\q\p\r\p\o\7\3\5\s\h\g\3\y\1\n\u\f\b\8\g\n\i\v\j\d\l\m\7\7\u\6\1\1\f\9\j\b\n\g\2\m\f\7\r\7\r\z\9\i\k\u\i\2\l\u\7\6\z\n\c\l\b\a\m\4\n\z\d\a\r\r\9\n\g\a\o\5\q\l\i\2\t\h\4\j\j\j\8\9\c\a\k\9\n\2\z\k\h\e\2\m\l\o\a\k\y\j\4\u\l\o\e\c\h\n\g\h\2\4\r\q\1\6\z\8\b\a\a\o\o\m\x\z\y\6\t\t\x\g\f\u\9\h\9\o\o\0\h\t\u\v\0\w\l\c\h\o\r\q\7\b\y\k\a\y\6\a\x\q\p\6\l\d\a\e\d\c\8\b\c\1\v\e\y\g\x\w\3\s\q\k\4\l\6\7\b\w\t\f\m\q\x\b\7\t\x\v\7\i\l\i\u\3\2\k\s\e\f\u\k\8\r\v\1\n\d\c\c\h\1\u\f\1\d\3\1\v\h\z\q\h\l\5\c\8\8\s\l\9\f\z\b\4\t\v\q\w\z\a\m\o\1\p\g\b\n\u\q\t\w\s\9\5\n\v\a\c\n\d\n\j\7\7\l\m\b\q\2\8\k\d\8\f\8\v\8\f\u\i\m\v\a\7\0\w\0\h\a\i\0\q\q\d\6\u\1\l\a\5\b\a\b\m\c\k\5\m\p\h\n\x\l\m\d\k\7\3\7\5\5\e\r\e\g\p\d\s\b\m\e ]] 00:18:28.805 09:46:52 -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:18:28.805 09:46:52 -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=dsync 00:18:29.064 [2024-04-18 09:46:52.293174] Starting SPDK v24.05-pre git sha1 65b4e17c6 / DPDK 23.11.0 initialization... 
00:18:29.064 [2024-04-18 09:46:52.293355] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65475 ] 00:18:29.323 [2024-04-18 09:46:52.468237] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:29.323 [2024-04-18 09:46:52.699417] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:18:30.832  Copying: 512/512 [B] (average 250 kBps) 00:18:30.832 00:18:31.091 09:46:54 -- dd/posix.sh@93 -- # [[ 9t58s4d2hk755hqv9uwg31vhetiz0pqo7vhh8zzdxm1fl94bs7ibfsesogbjz9kal49st8t3u9pg916lk9xrlaxj54le6d4owpfkr6o7sok3yvjpqxhguzk662eies32d2ogs7uavd432obd548lqm7tfmcouxatimqprpo735shg3y1nufb8gnivjdlm77u611f9jbng2mf7r7rz9ikui2lu76znclbam4nzdarr9ngao5qli2th4jjj89cak9n2zkhe2mloakyj4uloechngh24rq16z8baaoomxzy6ttxgfu9h9oo0htuv0wlchorq7bykay6axqp6ldaedc8bc1veygxw3sqk4l67bwtfmqxb7txv7iliu32ksefuk8rv1ndcch1uf1d31vhzqhl5c88sl9fzb4tvqwzamo1pgbnuqtws95nvacndnj77lmbq28kd8f8v8fuimva70w0hai0qqd6u1la5babmck5mphnxlmdk73755eregpdsbme == \9\t\5\8\s\4\d\2\h\k\7\5\5\h\q\v\9\u\w\g\3\1\v\h\e\t\i\z\0\p\q\o\7\v\h\h\8\z\z\d\x\m\1\f\l\9\4\b\s\7\i\b\f\s\e\s\o\g\b\j\z\9\k\a\l\4\9\s\t\8\t\3\u\9\p\g\9\1\6\l\k\9\x\r\l\a\x\j\5\4\l\e\6\d\4\o\w\p\f\k\r\6\o\7\s\o\k\3\y\v\j\p\q\x\h\g\u\z\k\6\6\2\e\i\e\s\3\2\d\2\o\g\s\7\u\a\v\d\4\3\2\o\b\d\5\4\8\l\q\m\7\t\f\m\c\o\u\x\a\t\i\m\q\p\r\p\o\7\3\5\s\h\g\3\y\1\n\u\f\b\8\g\n\i\v\j\d\l\m\7\7\u\6\1\1\f\9\j\b\n\g\2\m\f\7\r\7\r\z\9\i\k\u\i\2\l\u\7\6\z\n\c\l\b\a\m\4\n\z\d\a\r\r\9\n\g\a\o\5\q\l\i\2\t\h\4\j\j\j\8\9\c\a\k\9\n\2\z\k\h\e\2\m\l\o\a\k\y\j\4\u\l\o\e\c\h\n\g\h\2\4\r\q\1\6\z\8\b\a\a\o\o\m\x\z\y\6\t\t\x\g\f\u\9\h\9\o\o\0\h\t\u\v\0\w\l\c\h\o\r\q\7\b\y\k\a\y\6\a\x\q\p\6\l\d\a\e\d\c\8\b\c\1\v\e\y\g\x\w\3\s\q\k\4\l\6\7\b\w\t\f\m\q\x\b\7\t\x\v\7\i\l\i\u\3\2\k\s\e\f\u\k\8\r\v\1\n\d\c\c\h\1\u\f\1\d\3\1\v\h\z\q\h\l\5\c\8\8\s\l\9\f\z\b\4\t\v\q\w\z\a\m\o\1\p\g\b\n\u\q\t\w\s\9\5\n\v\a\c\n\d\n\j\7\7\l\m\b\q\2\8\k\d\8\f\8\v\8\f\u\i\m\v\a\7\0\w\0\h\a\i\0\q\q\d\6\u\1\l\a\5\b\a\b\m\c\k\5\m\p\h\n\x\l\m\d\k\7\3\7\5\5\e\r\e\g\p\d\s\b\m\e ]] 00:18:31.091 09:46:54 -- dd/posix.sh@85 -- # for flag_ro in "${flags_ro[@]}" 00:18:31.091 09:46:54 -- dd/posix.sh@86 -- # gen_bytes 512 00:18:31.091 09:46:54 -- dd/common.sh@98 -- # xtrace_disable 00:18:31.091 09:46:54 -- common/autotest_common.sh@10 -- # set +x 00:18:31.091 09:46:54 -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:18:31.091 09:46:54 -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=direct 00:18:31.091 [2024-04-18 09:46:54.348560] Starting SPDK v24.05-pre git sha1 65b4e17c6 / DPDK 23.11.0 initialization... 
00:18:31.091 [2024-04-18 09:46:54.348763] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65500 ] 00:18:31.350 [2024-04-18 09:46:54.521185] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:31.609 [2024-04-18 09:46:54.755773] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:18:33.244  Copying: 512/512 [B] (average 500 kBps) 00:18:33.244 00:18:33.244 09:46:56 -- dd/posix.sh@93 -- # [[ 8fqbvnc7s1f04fxkavvjbqhl5icjktdn23wznrzz3dulnveag4vighr5mg59ex4ljf3pt0p7p2dixfoaxkixo2ohly9ttm2466n11s8w7wqus4ujmzfc2dgklu04fg1oc3nsvzneov5pydf3sb35888m8dbebsu51p36buokm5x840fwo5ttm4u86moexwb39kxukw4g1i5n8eqo1vm96mwwr7foh2352wl3bmrhst6347mlzw6q4effu82fmdgu036s5pved38kiaih01o160g5oqscrbmge3wj4b3nqvfwxiu07gxicylwv1is9y6pufv5rqu5l0dw5q3pjffvzpjnrpxpz8h1w47estb0nerwx6dwyuyny258i7koknnn3k72tj74n2zvjis1ndlx8atmiidek96cy6wdu33jvgmsa79q62oy84yz74jzia1htblu0d1b6ooyqwc86zufmv79xztpavqr6pjzmlkuyky111loirmtp5pe8zy3ag0p == \8\f\q\b\v\n\c\7\s\1\f\0\4\f\x\k\a\v\v\j\b\q\h\l\5\i\c\j\k\t\d\n\2\3\w\z\n\r\z\z\3\d\u\l\n\v\e\a\g\4\v\i\g\h\r\5\m\g\5\9\e\x\4\l\j\f\3\p\t\0\p\7\p\2\d\i\x\f\o\a\x\k\i\x\o\2\o\h\l\y\9\t\t\m\2\4\6\6\n\1\1\s\8\w\7\w\q\u\s\4\u\j\m\z\f\c\2\d\g\k\l\u\0\4\f\g\1\o\c\3\n\s\v\z\n\e\o\v\5\p\y\d\f\3\s\b\3\5\8\8\8\m\8\d\b\e\b\s\u\5\1\p\3\6\b\u\o\k\m\5\x\8\4\0\f\w\o\5\t\t\m\4\u\8\6\m\o\e\x\w\b\3\9\k\x\u\k\w\4\g\1\i\5\n\8\e\q\o\1\v\m\9\6\m\w\w\r\7\f\o\h\2\3\5\2\w\l\3\b\m\r\h\s\t\6\3\4\7\m\l\z\w\6\q\4\e\f\f\u\8\2\f\m\d\g\u\0\3\6\s\5\p\v\e\d\3\8\k\i\a\i\h\0\1\o\1\6\0\g\5\o\q\s\c\r\b\m\g\e\3\w\j\4\b\3\n\q\v\f\w\x\i\u\0\7\g\x\i\c\y\l\w\v\1\i\s\9\y\6\p\u\f\v\5\r\q\u\5\l\0\d\w\5\q\3\p\j\f\f\v\z\p\j\n\r\p\x\p\z\8\h\1\w\4\7\e\s\t\b\0\n\e\r\w\x\6\d\w\y\u\y\n\y\2\5\8\i\7\k\o\k\n\n\n\3\k\7\2\t\j\7\4\n\2\z\v\j\i\s\1\n\d\l\x\8\a\t\m\i\i\d\e\k\9\6\c\y\6\w\d\u\3\3\j\v\g\m\s\a\7\9\q\6\2\o\y\8\4\y\z\7\4\j\z\i\a\1\h\t\b\l\u\0\d\1\b\6\o\o\y\q\w\c\8\6\z\u\f\m\v\7\9\x\z\t\p\a\v\q\r\6\p\j\z\m\l\k\u\y\k\y\1\1\1\l\o\i\r\m\t\p\5\p\e\8\z\y\3\a\g\0\p ]] 00:18:33.244 09:46:56 -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:18:33.244 09:46:56 -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=nonblock 00:18:33.244 [2024-04-18 09:46:56.387554] Starting SPDK v24.05-pre git sha1 65b4e17c6 / DPDK 23.11.0 initialization... 
00:18:33.244 [2024-04-18 09:46:56.387705] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65525 ] 00:18:33.244 [2024-04-18 09:46:56.553841] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:33.503 [2024-04-18 09:46:56.794575] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:18:35.198  Copying: 512/512 [B] (average 500 kBps) 00:18:35.198 00:18:35.198 09:46:58 -- dd/posix.sh@93 -- # [[ 8fqbvnc7s1f04fxkavvjbqhl5icjktdn23wznrzz3dulnveag4vighr5mg59ex4ljf3pt0p7p2dixfoaxkixo2ohly9ttm2466n11s8w7wqus4ujmzfc2dgklu04fg1oc3nsvzneov5pydf3sb35888m8dbebsu51p36buokm5x840fwo5ttm4u86moexwb39kxukw4g1i5n8eqo1vm96mwwr7foh2352wl3bmrhst6347mlzw6q4effu82fmdgu036s5pved38kiaih01o160g5oqscrbmge3wj4b3nqvfwxiu07gxicylwv1is9y6pufv5rqu5l0dw5q3pjffvzpjnrpxpz8h1w47estb0nerwx6dwyuyny258i7koknnn3k72tj74n2zvjis1ndlx8atmiidek96cy6wdu33jvgmsa79q62oy84yz74jzia1htblu0d1b6ooyqwc86zufmv79xztpavqr6pjzmlkuyky111loirmtp5pe8zy3ag0p == \8\f\q\b\v\n\c\7\s\1\f\0\4\f\x\k\a\v\v\j\b\q\h\l\5\i\c\j\k\t\d\n\2\3\w\z\n\r\z\z\3\d\u\l\n\v\e\a\g\4\v\i\g\h\r\5\m\g\5\9\e\x\4\l\j\f\3\p\t\0\p\7\p\2\d\i\x\f\o\a\x\k\i\x\o\2\o\h\l\y\9\t\t\m\2\4\6\6\n\1\1\s\8\w\7\w\q\u\s\4\u\j\m\z\f\c\2\d\g\k\l\u\0\4\f\g\1\o\c\3\n\s\v\z\n\e\o\v\5\p\y\d\f\3\s\b\3\5\8\8\8\m\8\d\b\e\b\s\u\5\1\p\3\6\b\u\o\k\m\5\x\8\4\0\f\w\o\5\t\t\m\4\u\8\6\m\o\e\x\w\b\3\9\k\x\u\k\w\4\g\1\i\5\n\8\e\q\o\1\v\m\9\6\m\w\w\r\7\f\o\h\2\3\5\2\w\l\3\b\m\r\h\s\t\6\3\4\7\m\l\z\w\6\q\4\e\f\f\u\8\2\f\m\d\g\u\0\3\6\s\5\p\v\e\d\3\8\k\i\a\i\h\0\1\o\1\6\0\g\5\o\q\s\c\r\b\m\g\e\3\w\j\4\b\3\n\q\v\f\w\x\i\u\0\7\g\x\i\c\y\l\w\v\1\i\s\9\y\6\p\u\f\v\5\r\q\u\5\l\0\d\w\5\q\3\p\j\f\f\v\z\p\j\n\r\p\x\p\z\8\h\1\w\4\7\e\s\t\b\0\n\e\r\w\x\6\d\w\y\u\y\n\y\2\5\8\i\7\k\o\k\n\n\n\3\k\7\2\t\j\7\4\n\2\z\v\j\i\s\1\n\d\l\x\8\a\t\m\i\i\d\e\k\9\6\c\y\6\w\d\u\3\3\j\v\g\m\s\a\7\9\q\6\2\o\y\8\4\y\z\7\4\j\z\i\a\1\h\t\b\l\u\0\d\1\b\6\o\o\y\q\w\c\8\6\z\u\f\m\v\7\9\x\z\t\p\a\v\q\r\6\p\j\z\m\l\k\u\y\k\y\1\1\1\l\o\i\r\m\t\p\5\p\e\8\z\y\3\a\g\0\p ]] 00:18:35.198 09:46:58 -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:18:35.198 09:46:58 -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=sync 00:18:35.198 [2024-04-18 09:46:58.370552] Starting SPDK v24.05-pre git sha1 65b4e17c6 / DPDK 23.11.0 initialization... 
00:18:35.198 [2024-04-18 09:46:58.370725] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65550 ] 00:18:35.198 [2024-04-18 09:46:58.542924] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:35.457 [2024-04-18 09:46:58.774620] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:18:37.091  Copying: 512/512 [B] (average 250 kBps) 00:18:37.091 00:18:37.091 09:47:00 -- dd/posix.sh@93 -- # [[ 8fqbvnc7s1f04fxkavvjbqhl5icjktdn23wznrzz3dulnveag4vighr5mg59ex4ljf3pt0p7p2dixfoaxkixo2ohly9ttm2466n11s8w7wqus4ujmzfc2dgklu04fg1oc3nsvzneov5pydf3sb35888m8dbebsu51p36buokm5x840fwo5ttm4u86moexwb39kxukw4g1i5n8eqo1vm96mwwr7foh2352wl3bmrhst6347mlzw6q4effu82fmdgu036s5pved38kiaih01o160g5oqscrbmge3wj4b3nqvfwxiu07gxicylwv1is9y6pufv5rqu5l0dw5q3pjffvzpjnrpxpz8h1w47estb0nerwx6dwyuyny258i7koknnn3k72tj74n2zvjis1ndlx8atmiidek96cy6wdu33jvgmsa79q62oy84yz74jzia1htblu0d1b6ooyqwc86zufmv79xztpavqr6pjzmlkuyky111loirmtp5pe8zy3ag0p == \8\f\q\b\v\n\c\7\s\1\f\0\4\f\x\k\a\v\v\j\b\q\h\l\5\i\c\j\k\t\d\n\2\3\w\z\n\r\z\z\3\d\u\l\n\v\e\a\g\4\v\i\g\h\r\5\m\g\5\9\e\x\4\l\j\f\3\p\t\0\p\7\p\2\d\i\x\f\o\a\x\k\i\x\o\2\o\h\l\y\9\t\t\m\2\4\6\6\n\1\1\s\8\w\7\w\q\u\s\4\u\j\m\z\f\c\2\d\g\k\l\u\0\4\f\g\1\o\c\3\n\s\v\z\n\e\o\v\5\p\y\d\f\3\s\b\3\5\8\8\8\m\8\d\b\e\b\s\u\5\1\p\3\6\b\u\o\k\m\5\x\8\4\0\f\w\o\5\t\t\m\4\u\8\6\m\o\e\x\w\b\3\9\k\x\u\k\w\4\g\1\i\5\n\8\e\q\o\1\v\m\9\6\m\w\w\r\7\f\o\h\2\3\5\2\w\l\3\b\m\r\h\s\t\6\3\4\7\m\l\z\w\6\q\4\e\f\f\u\8\2\f\m\d\g\u\0\3\6\s\5\p\v\e\d\3\8\k\i\a\i\h\0\1\o\1\6\0\g\5\o\q\s\c\r\b\m\g\e\3\w\j\4\b\3\n\q\v\f\w\x\i\u\0\7\g\x\i\c\y\l\w\v\1\i\s\9\y\6\p\u\f\v\5\r\q\u\5\l\0\d\w\5\q\3\p\j\f\f\v\z\p\j\n\r\p\x\p\z\8\h\1\w\4\7\e\s\t\b\0\n\e\r\w\x\6\d\w\y\u\y\n\y\2\5\8\i\7\k\o\k\n\n\n\3\k\7\2\t\j\7\4\n\2\z\v\j\i\s\1\n\d\l\x\8\a\t\m\i\i\d\e\k\9\6\c\y\6\w\d\u\3\3\j\v\g\m\s\a\7\9\q\6\2\o\y\8\4\y\z\7\4\j\z\i\a\1\h\t\b\l\u\0\d\1\b\6\o\o\y\q\w\c\8\6\z\u\f\m\v\7\9\x\z\t\p\a\v\q\r\6\p\j\z\m\l\k\u\y\k\y\1\1\1\l\o\i\r\m\t\p\5\p\e\8\z\y\3\a\g\0\p ]] 00:18:37.091 09:47:00 -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:18:37.091 09:47:00 -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=dsync 00:18:37.091 [2024-04-18 09:47:00.324731] Starting SPDK v24.05-pre git sha1 65b4e17c6 / DPDK 23.11.0 initialization... 
00:18:37.091 [2024-04-18 09:47:00.324909] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65575 ] 00:18:37.350 [2024-04-18 09:47:00.497778] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:37.350 [2024-04-18 09:47:00.733514] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:18:38.855  Copying: 512/512 [B] (average 166 kBps) 00:18:38.855 00:18:38.855 ************************************ 00:18:38.855 END TEST dd_flags_misc_forced_aio 00:18:38.855 ************************************ 00:18:38.856 09:47:02 -- dd/posix.sh@93 -- # [[ 8fqbvnc7s1f04fxkavvjbqhl5icjktdn23wznrzz3dulnveag4vighr5mg59ex4ljf3pt0p7p2dixfoaxkixo2ohly9ttm2466n11s8w7wqus4ujmzfc2dgklu04fg1oc3nsvzneov5pydf3sb35888m8dbebsu51p36buokm5x840fwo5ttm4u86moexwb39kxukw4g1i5n8eqo1vm96mwwr7foh2352wl3bmrhst6347mlzw6q4effu82fmdgu036s5pved38kiaih01o160g5oqscrbmge3wj4b3nqvfwxiu07gxicylwv1is9y6pufv5rqu5l0dw5q3pjffvzpjnrpxpz8h1w47estb0nerwx6dwyuyny258i7koknnn3k72tj74n2zvjis1ndlx8atmiidek96cy6wdu33jvgmsa79q62oy84yz74jzia1htblu0d1b6ooyqwc86zufmv79xztpavqr6pjzmlkuyky111loirmtp5pe8zy3ag0p == \8\f\q\b\v\n\c\7\s\1\f\0\4\f\x\k\a\v\v\j\b\q\h\l\5\i\c\j\k\t\d\n\2\3\w\z\n\r\z\z\3\d\u\l\n\v\e\a\g\4\v\i\g\h\r\5\m\g\5\9\e\x\4\l\j\f\3\p\t\0\p\7\p\2\d\i\x\f\o\a\x\k\i\x\o\2\o\h\l\y\9\t\t\m\2\4\6\6\n\1\1\s\8\w\7\w\q\u\s\4\u\j\m\z\f\c\2\d\g\k\l\u\0\4\f\g\1\o\c\3\n\s\v\z\n\e\o\v\5\p\y\d\f\3\s\b\3\5\8\8\8\m\8\d\b\e\b\s\u\5\1\p\3\6\b\u\o\k\m\5\x\8\4\0\f\w\o\5\t\t\m\4\u\8\6\m\o\e\x\w\b\3\9\k\x\u\k\w\4\g\1\i\5\n\8\e\q\o\1\v\m\9\6\m\w\w\r\7\f\o\h\2\3\5\2\w\l\3\b\m\r\h\s\t\6\3\4\7\m\l\z\w\6\q\4\e\f\f\u\8\2\f\m\d\g\u\0\3\6\s\5\p\v\e\d\3\8\k\i\a\i\h\0\1\o\1\6\0\g\5\o\q\s\c\r\b\m\g\e\3\w\j\4\b\3\n\q\v\f\w\x\i\u\0\7\g\x\i\c\y\l\w\v\1\i\s\9\y\6\p\u\f\v\5\r\q\u\5\l\0\d\w\5\q\3\p\j\f\f\v\z\p\j\n\r\p\x\p\z\8\h\1\w\4\7\e\s\t\b\0\n\e\r\w\x\6\d\w\y\u\y\n\y\2\5\8\i\7\k\o\k\n\n\n\3\k\7\2\t\j\7\4\n\2\z\v\j\i\s\1\n\d\l\x\8\a\t\m\i\i\d\e\k\9\6\c\y\6\w\d\u\3\3\j\v\g\m\s\a\7\9\q\6\2\o\y\8\4\y\z\7\4\j\z\i\a\1\h\t\b\l\u\0\d\1\b\6\o\o\y\q\w\c\8\6\z\u\f\m\v\7\9\x\z\t\p\a\v\q\r\6\p\j\z\m\l\k\u\y\k\y\1\1\1\l\o\i\r\m\t\p\5\p\e\8\z\y\3\a\g\0\p ]] 00:18:38.856 00:18:38.856 real 0m16.043s 00:18:38.856 user 0m12.952s 00:18:38.856 sys 0m2.019s 00:18:38.856 09:47:02 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:18:38.856 09:47:02 -- common/autotest_common.sh@10 -- # set +x 00:18:38.856 09:47:02 -- dd/posix.sh@1 -- # cleanup 00:18:38.856 09:47:02 -- dd/posix.sh@11 -- # rm -f /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link 00:18:38.856 09:47:02 -- dd/posix.sh@12 -- # rm -f /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link 00:18:38.856 ************************************ 00:18:38.856 END TEST spdk_dd_posix 00:18:38.856 ************************************ 00:18:38.856 00:18:38.856 real 1m7.560s 00:18:38.856 user 0m52.464s 00:18:38.856 sys 0m18.009s 00:18:38.856 09:47:02 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:18:38.856 09:47:02 -- common/autotest_common.sh@10 -- # set +x 00:18:39.114 09:47:02 -- dd/dd.sh@22 -- # run_test spdk_dd_malloc /home/vagrant/spdk_repo/spdk/test/dd/malloc.sh 00:18:39.114 09:47:02 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:18:39.114 09:47:02 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:18:39.114 09:47:02 -- 
common/autotest_common.sh@10 -- # set +x 00:18:39.114 ************************************ 00:18:39.114 START TEST spdk_dd_malloc 00:18:39.114 ************************************ 00:18:39.114 09:47:02 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/dd/malloc.sh 00:18:39.114 * Looking for test storage... 00:18:39.114 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:18:39.114 09:47:02 -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:18:39.114 09:47:02 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:39.114 09:47:02 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:39.114 09:47:02 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:39.114 09:47:02 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:39.114 09:47:02 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:39.114 09:47:02 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:39.114 09:47:02 -- paths/export.sh@5 -- # export PATH 00:18:39.114 09:47:02 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:39.114 09:47:02 -- dd/malloc.sh@38 -- # run_test dd_malloc_copy malloc_copy 00:18:39.114 09:47:02 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:18:39.114 09:47:02 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:18:39.114 09:47:02 -- common/autotest_common.sh@10 -- # set +x 00:18:39.373 ************************************ 00:18:39.373 START TEST dd_malloc_copy 00:18:39.373 
************************************ 00:18:39.373 09:47:02 -- common/autotest_common.sh@1111 -- # malloc_copy 00:18:39.373 09:47:02 -- dd/malloc.sh@12 -- # local mbdev0=malloc0 mbdev0_b=1048576 mbdev0_bs=512 00:18:39.373 09:47:02 -- dd/malloc.sh@13 -- # local mbdev1=malloc1 mbdev1_b=1048576 mbdev1_bs=512 00:18:39.373 09:47:02 -- dd/malloc.sh@15 -- # method_bdev_malloc_create_0=(['name']='malloc0' ['num_blocks']='1048576' ['block_size']='512') 00:18:39.373 09:47:02 -- dd/malloc.sh@15 -- # local -A method_bdev_malloc_create_0 00:18:39.373 09:47:02 -- dd/malloc.sh@21 -- # method_bdev_malloc_create_1=(['name']='malloc1' ['num_blocks']='1048576' ['block_size']='512') 00:18:39.373 09:47:02 -- dd/malloc.sh@21 -- # local -A method_bdev_malloc_create_1 00:18:39.373 09:47:02 -- dd/malloc.sh@28 -- # gen_conf 00:18:39.373 09:47:02 -- dd/malloc.sh@28 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=malloc1 --json /dev/fd/62 00:18:39.373 09:47:02 -- dd/common.sh@31 -- # xtrace_disable 00:18:39.373 09:47:02 -- common/autotest_common.sh@10 -- # set +x 00:18:39.373 { 00:18:39.373 "subsystems": [ 00:18:39.373 { 00:18:39.373 "subsystem": "bdev", 00:18:39.373 "config": [ 00:18:39.373 { 00:18:39.373 "params": { 00:18:39.373 "block_size": 512, 00:18:39.373 "num_blocks": 1048576, 00:18:39.373 "name": "malloc0" 00:18:39.373 }, 00:18:39.373 "method": "bdev_malloc_create" 00:18:39.373 }, 00:18:39.373 { 00:18:39.373 "params": { 00:18:39.373 "block_size": 512, 00:18:39.373 "num_blocks": 1048576, 00:18:39.373 "name": "malloc1" 00:18:39.373 }, 00:18:39.373 "method": "bdev_malloc_create" 00:18:39.373 }, 00:18:39.373 { 00:18:39.373 "method": "bdev_wait_for_examine" 00:18:39.373 } 00:18:39.373 ] 00:18:39.373 } 00:18:39.373 ] 00:18:39.373 } 00:18:39.373 [2024-04-18 09:47:02.632915] Starting SPDK v24.05-pre git sha1 65b4e17c6 / DPDK 23.11.0 initialization... 
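For reference, the spdk_dd run traced above can be reproduced outside the test harness: the test feeds the bdev configuration to --json over a pipe (/dev/fd/62) and names the copy source and target bdevs with --ib/--ob. A minimal standalone sketch of the same malloc0 -> malloc1 copy, assuming the spdk_dd binary path from this run and assuming --json also accepts an ordinary file path (the /tmp/malloc_copy.json name is illustrative, not from the log):

  cat > /tmp/malloc_copy.json <<'EOF'
  {
    "subsystems": [
      {
        "subsystem": "bdev",
        "config": [
          { "params": { "name": "malloc0", "num_blocks": 1048576, "block_size": 512 },
            "method": "bdev_malloc_create" },
          { "params": { "name": "malloc1", "num_blocks": 1048576, "block_size": 512 },
            "method": "bdev_malloc_create" },
          { "method": "bdev_wait_for_examine" }
        ]
      }
    ]
  }
  EOF
  # 1048576 blocks x 512 B = 512 MiB per malloc bdev, matching the 512/512 [MB] copy above.
  /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=malloc1 --json /tmp/malloc_copy.json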
00:18:39.373 [2024-04-18 09:47:02.633414] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65670 ] 00:18:39.632 [2024-04-18 09:47:02.808934] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:39.899 [2024-04-18 09:47:03.077978] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:18:48.821  Copying: 159/512 [MB] (159 MBps) Copying: 316/512 [MB] (157 MBps) Copying: 475/512 [MB] (159 MBps) Copying: 512/512 [MB] (average 158 MBps) 00:18:48.821 00:18:48.821 09:47:11 -- dd/malloc.sh@33 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc1 --ob=malloc0 --json /dev/fd/62 00:18:48.821 09:47:11 -- dd/malloc.sh@33 -- # gen_conf 00:18:48.821 09:47:11 -- dd/common.sh@31 -- # xtrace_disable 00:18:48.821 09:47:11 -- common/autotest_common.sh@10 -- # set +x 00:18:48.821 { 00:18:48.821 "subsystems": [ 00:18:48.821 { 00:18:48.821 "subsystem": "bdev", 00:18:48.821 "config": [ 00:18:48.821 { 00:18:48.821 "params": { 00:18:48.821 "block_size": 512, 00:18:48.821 "num_blocks": 1048576, 00:18:48.821 "name": "malloc0" 00:18:48.821 }, 00:18:48.821 "method": "bdev_malloc_create" 00:18:48.821 }, 00:18:48.821 { 00:18:48.821 "params": { 00:18:48.821 "block_size": 512, 00:18:48.821 "num_blocks": 1048576, 00:18:48.821 "name": "malloc1" 00:18:48.821 }, 00:18:48.821 "method": "bdev_malloc_create" 00:18:48.821 }, 00:18:48.821 { 00:18:48.821 "method": "bdev_wait_for_examine" 00:18:48.821 } 00:18:48.821 ] 00:18:48.821 } 00:18:48.821 ] 00:18:48.821 } 00:18:48.821 [2024-04-18 09:47:11.544965] Starting SPDK v24.05-pre git sha1 65b4e17c6 / DPDK 23.11.0 initialization... 
00:18:48.821 [2024-04-18 09:47:11.545179] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65774 ] 00:18:48.822 [2024-04-18 09:47:11.721206] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:48.822 [2024-04-18 09:47:11.963936] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:18:57.422  Copying: 159/512 [MB] (159 MBps) Copying: 321/512 [MB] (162 MBps) Copying: 483/512 [MB] (161 MBps) Copying: 512/512 [MB] (average 161 MBps) 00:18:57.422 00:18:57.422 00:18:57.422 real 0m17.690s 00:18:57.422 user 0m16.153s 00:18:57.422 sys 0m1.319s 00:18:57.422 ************************************ 00:18:57.422 END TEST dd_malloc_copy 00:18:57.422 ************************************ 00:18:57.422 09:47:20 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:18:57.422 09:47:20 -- common/autotest_common.sh@10 -- # set +x 00:18:57.422 ************************************ 00:18:57.422 END TEST spdk_dd_malloc 00:18:57.422 ************************************ 00:18:57.422 00:18:57.422 real 0m17.911s 00:18:57.422 user 0m16.251s 00:18:57.422 sys 0m1.430s 00:18:57.422 09:47:20 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:18:57.422 09:47:20 -- common/autotest_common.sh@10 -- # set +x 00:18:57.422 09:47:20 -- dd/dd.sh@23 -- # run_test spdk_dd_bdev_to_bdev /home/vagrant/spdk_repo/spdk/test/dd/bdev_to_bdev.sh 0000:00:10.0 0000:00:11.0 00:18:57.422 09:47:20 -- common/autotest_common.sh@1087 -- # '[' 4 -le 1 ']' 00:18:57.422 09:47:20 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:18:57.422 09:47:20 -- common/autotest_common.sh@10 -- # set +x 00:18:57.422 ************************************ 00:18:57.422 START TEST spdk_dd_bdev_to_bdev 00:18:57.422 ************************************ 00:18:57.422 09:47:20 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/dd/bdev_to_bdev.sh 0000:00:10.0 0000:00:11.0 00:18:57.422 * Looking for test storage... 
00:18:57.422 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:18:57.422 09:47:20 -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:18:57.422 09:47:20 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:57.422 09:47:20 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:57.422 09:47:20 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:57.423 09:47:20 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:57.423 09:47:20 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:57.423 09:47:20 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:57.423 09:47:20 -- paths/export.sh@5 -- # export PATH 00:18:57.423 09:47:20 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:57.423 09:47:20 -- dd/bdev_to_bdev.sh@10 -- # nvmes=("$@") 00:18:57.423 09:47:20 -- dd/bdev_to_bdev.sh@47 -- # trap cleanup EXIT 00:18:57.423 09:47:20 -- dd/bdev_to_bdev.sh@49 -- # bs=1048576 00:18:57.423 09:47:20 -- dd/bdev_to_bdev.sh@51 -- # (( 2 > 1 )) 00:18:57.423 09:47:20 -- dd/bdev_to_bdev.sh@52 -- # nvme0=Nvme0 00:18:57.423 09:47:20 -- dd/bdev_to_bdev.sh@52 -- # bdev0=Nvme0n1 00:18:57.423 09:47:20 -- dd/bdev_to_bdev.sh@52 -- # nvme0_pci=0000:00:10.0 00:18:57.423 09:47:20 -- dd/bdev_to_bdev.sh@53 -- # nvme1=Nvme1 00:18:57.423 09:47:20 -- dd/bdev_to_bdev.sh@53 -- # bdev1=Nvme1n1 00:18:57.423 09:47:20 -- dd/bdev_to_bdev.sh@53 -- # nvme1_pci=0000:00:11.0 00:18:57.423 09:47:20 -- dd/bdev_to_bdev.sh@55 -- # 
method_bdev_nvme_attach_controller_0=(['name']='Nvme0' ['traddr']='0000:00:10.0' ['trtype']='pcie') 00:18:57.423 09:47:20 -- dd/bdev_to_bdev.sh@55 -- # declare -A method_bdev_nvme_attach_controller_0 00:18:57.423 09:47:20 -- dd/bdev_to_bdev.sh@60 -- # method_bdev_nvme_attach_controller_1=(['name']='Nvme1' ['traddr']='0000:00:11.0' ['trtype']='pcie') 00:18:57.423 09:47:20 -- dd/bdev_to_bdev.sh@60 -- # declare -A method_bdev_nvme_attach_controller_1 00:18:57.423 09:47:20 -- dd/bdev_to_bdev.sh@89 -- # test_file0=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:18:57.423 09:47:20 -- dd/bdev_to_bdev.sh@90 -- # test_file1=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:18:57.423 09:47:20 -- dd/bdev_to_bdev.sh@92 -- # magic='This Is Our Magic, find it' 00:18:57.423 09:47:20 -- dd/bdev_to_bdev.sh@93 -- # echo 'This Is Our Magic, find it' 00:18:57.423 09:47:20 -- dd/bdev_to_bdev.sh@96 -- # run_test dd_inflate_file /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=append --bs=1048576 --count=64 00:18:57.423 09:47:20 -- common/autotest_common.sh@1087 -- # '[' 7 -le 1 ']' 00:18:57.423 09:47:20 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:18:57.423 09:47:20 -- common/autotest_common.sh@10 -- # set +x 00:18:57.423 ************************************ 00:18:57.423 START TEST dd_inflate_file 00:18:57.423 ************************************ 00:18:57.423 09:47:20 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=append --bs=1048576 --count=64 00:18:57.423 [2024-04-18 09:47:20.662600] Starting SPDK v24.05-pre git sha1 65b4e17c6 / DPDK 23.11.0 initialization... 00:18:57.423 [2024-04-18 09:47:20.662778] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65950 ] 00:18:57.682 [2024-04-18 09:47:20.834914] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:57.941 [2024-04-18 09:47:21.073702] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:18:59.194  Copying: 64/64 [MB] (average 2000 MBps) 00:18:59.194 00:18:59.194 ************************************ 00:18:59.194 END TEST dd_inflate_file 00:18:59.194 00:18:59.194 real 0m2.017s 00:18:59.194 user 0m1.629s 00:18:59.194 sys 0m1.076s 00:18:59.194 09:47:22 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:18:59.194 09:47:22 -- common/autotest_common.sh@10 -- # set +x 00:18:59.194 ************************************ 00:18:59.452 09:47:22 -- dd/bdev_to_bdev.sh@104 -- # wc -c 00:18:59.452 09:47:22 -- dd/bdev_to_bdev.sh@104 -- # test_file0_size=67108891 00:18:59.452 09:47:22 -- dd/bdev_to_bdev.sh@107 -- # run_test dd_copy_to_out_bdev /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --json /dev/fd/62 00:18:59.452 09:47:22 -- dd/bdev_to_bdev.sh@107 -- # gen_conf 00:18:59.452 09:47:22 -- dd/common.sh@31 -- # xtrace_disable 00:18:59.452 09:47:22 -- common/autotest_common.sh@1087 -- # '[' 6 -le 1 ']' 00:18:59.452 09:47:22 -- common/autotest_common.sh@10 -- # set +x 00:18:59.452 09:47:22 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:18:59.452 09:47:22 -- common/autotest_common.sh@10 -- # set +x 00:18:59.452 { 00:18:59.452 "subsystems": [ 00:18:59.452 { 00:18:59.452 "subsystem": "bdev", 
00:18:59.452 "config": [ 00:18:59.452 { 00:18:59.452 "params": { 00:18:59.452 "trtype": "pcie", 00:18:59.452 "traddr": "0000:00:10.0", 00:18:59.452 "name": "Nvme0" 00:18:59.452 }, 00:18:59.452 "method": "bdev_nvme_attach_controller" 00:18:59.452 }, 00:18:59.452 { 00:18:59.452 "params": { 00:18:59.452 "trtype": "pcie", 00:18:59.452 "traddr": "0000:00:11.0", 00:18:59.452 "name": "Nvme1" 00:18:59.452 }, 00:18:59.452 "method": "bdev_nvme_attach_controller" 00:18:59.452 }, 00:18:59.452 { 00:18:59.452 "method": "bdev_wait_for_examine" 00:18:59.452 } 00:18:59.452 ] 00:18:59.452 } 00:18:59.452 ] 00:18:59.452 } 00:18:59.452 ************************************ 00:18:59.452 START TEST dd_copy_to_out_bdev 00:18:59.452 ************************************ 00:18:59.452 09:47:22 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --json /dev/fd/62 00:18:59.453 [2024-04-18 09:47:22.803695] Starting SPDK v24.05-pre git sha1 65b4e17c6 / DPDK 23.11.0 initialization... 00:18:59.453 [2024-04-18 09:47:22.803879] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid66007 ] 00:18:59.711 [2024-04-18 09:47:22.976879] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:59.970 [2024-04-18 09:47:23.239546] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:19:02.978  Copying: 54/64 [MB] (54 MBps) Copying: 64/64 [MB] (average 54 MBps) 00:19:02.978 00:19:02.978 00:19:02.978 real 0m3.366s 00:19:02.978 user 0m2.943s 00:19:02.978 sys 0m2.259s 00:19:02.978 09:47:26 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:19:02.978 ************************************ 00:19:02.978 END TEST dd_copy_to_out_bdev 00:19:02.978 ************************************ 00:19:02.978 09:47:26 -- common/autotest_common.sh@10 -- # set +x 00:19:02.978 09:47:26 -- dd/bdev_to_bdev.sh@113 -- # count=65 00:19:02.978 09:47:26 -- dd/bdev_to_bdev.sh@115 -- # run_test dd_offset_magic offset_magic 00:19:02.978 09:47:26 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:19:02.978 09:47:26 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:19:02.978 09:47:26 -- common/autotest_common.sh@10 -- # set +x 00:19:02.978 ************************************ 00:19:02.978 START TEST dd_offset_magic 00:19:02.978 ************************************ 00:19:02.978 09:47:26 -- common/autotest_common.sh@1111 -- # offset_magic 00:19:02.978 09:47:26 -- dd/bdev_to_bdev.sh@13 -- # local magic_check 00:19:02.978 09:47:26 -- dd/bdev_to_bdev.sh@14 -- # local offsets offset 00:19:02.978 09:47:26 -- dd/bdev_to_bdev.sh@16 -- # offsets=(16 64) 00:19:02.978 09:47:26 -- dd/bdev_to_bdev.sh@18 -- # for offset in "${offsets[@]}" 00:19:02.979 09:47:26 -- dd/bdev_to_bdev.sh@20 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --ob=Nvme1n1 --count=65 --seek=16 --bs=1048576 --json /dev/fd/62 00:19:02.979 09:47:26 -- dd/bdev_to_bdev.sh@20 -- # gen_conf 00:19:02.979 09:47:26 -- dd/common.sh@31 -- # xtrace_disable 00:19:02.979 09:47:26 -- common/autotest_common.sh@10 -- # set +x 00:19:02.979 { 00:19:02.979 "subsystems": [ 00:19:02.979 { 00:19:02.979 "subsystem": "bdev", 00:19:02.979 "config": [ 00:19:02.979 { 00:19:02.979 "params": { 00:19:02.979 "trtype": "pcie", 00:19:02.979 "traddr": "0000:00:10.0", 00:19:02.979 "name": "Nvme0" 00:19:02.979 }, 
00:19:02.979 "method": "bdev_nvme_attach_controller" 00:19:02.979 }, 00:19:02.979 { 00:19:02.979 "params": { 00:19:02.979 "trtype": "pcie", 00:19:02.979 "traddr": "0000:00:11.0", 00:19:02.979 "name": "Nvme1" 00:19:02.979 }, 00:19:02.979 "method": "bdev_nvme_attach_controller" 00:19:02.979 }, 00:19:02.979 { 00:19:02.979 "method": "bdev_wait_for_examine" 00:19:02.979 } 00:19:02.979 ] 00:19:02.979 } 00:19:02.979 ] 00:19:02.979 } 00:19:02.979 [2024-04-18 09:47:26.296642] Starting SPDK v24.05-pre git sha1 65b4e17c6 / DPDK 23.11.0 initialization... 00:19:02.979 [2024-04-18 09:47:26.296888] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid66066 ] 00:19:03.237 [2024-04-18 09:47:26.470805] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:03.496 [2024-04-18 09:47:26.712209] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:19:04.999  Copying: 65/65 [MB] (average 955 MBps) 00:19:04.999 00:19:04.999 09:47:28 -- dd/bdev_to_bdev.sh@28 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme1n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --count=1 --skip=16 --bs=1048576 --json /dev/fd/62 00:19:04.999 09:47:28 -- dd/bdev_to_bdev.sh@28 -- # gen_conf 00:19:04.999 09:47:28 -- dd/common.sh@31 -- # xtrace_disable 00:19:04.999 09:47:28 -- common/autotest_common.sh@10 -- # set +x 00:19:04.999 { 00:19:04.999 "subsystems": [ 00:19:04.999 { 00:19:04.999 "subsystem": "bdev", 00:19:04.999 "config": [ 00:19:04.999 { 00:19:04.999 "params": { 00:19:04.999 "trtype": "pcie", 00:19:04.999 "traddr": "0000:00:10.0", 00:19:04.999 "name": "Nvme0" 00:19:04.999 }, 00:19:04.999 "method": "bdev_nvme_attach_controller" 00:19:04.999 }, 00:19:04.999 { 00:19:04.999 "params": { 00:19:04.999 "trtype": "pcie", 00:19:04.999 "traddr": "0000:00:11.0", 00:19:04.999 "name": "Nvme1" 00:19:04.999 }, 00:19:04.999 "method": "bdev_nvme_attach_controller" 00:19:04.999 }, 00:19:04.999 { 00:19:04.999 "method": "bdev_wait_for_examine" 00:19:04.999 } 00:19:04.999 ] 00:19:04.999 } 00:19:04.999 ] 00:19:04.999 } 00:19:05.258 [2024-04-18 09:47:28.415991] Starting SPDK v24.05-pre git sha1 65b4e17c6 / DPDK 23.11.0 initialization... 
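All of the bdev_to_bdev runs in this block share the controller setup shown in their generated configs: two local NVMe controllers attached over PCIe and addressed by bdev name. A hedged sketch of that attach config plus the dd_copy_to_out_bdev step, using only parameters visible in the log (the /tmp/nvme_pair.json path is illustrative; note the copy overwrites namespace 1 of the first controller, exactly as the test does):

  cat > /tmp/nvme_pair.json <<'EOF'
  {
    "subsystems": [
      {
        "subsystem": "bdev",
        "config": [
          { "params": { "trtype": "pcie", "traddr": "0000:00:10.0", "name": "Nvme0" },
            "method": "bdev_nvme_attach_controller" },
          { "params": { "trtype": "pcie", "traddr": "0000:00:11.0", "name": "Nvme1" },
            "method": "bdev_nvme_attach_controller" },
          { "method": "bdev_wait_for_examine" }
        ]
      }
    ]
  }
  EOF
  # Copy the ~64 MiB dump file into bdev Nvme0n1 (namespace 1 of controller Nvme0).
  /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 \
      --ob=Nvme0n1 --json /tmp/nvme_pair.json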
00:19:05.258 [2024-04-18 09:47:28.416190] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid66111 ] 00:19:05.258 [2024-04-18 09:47:28.590083] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:05.526 [2024-04-18 09:47:28.830656] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:19:07.045  Copying: 1024/1024 [kB] (average 1000 MBps) 00:19:07.045 00:19:07.045 09:47:30 -- dd/bdev_to_bdev.sh@35 -- # read -rn26 magic_check 00:19:07.045 09:47:30 -- dd/bdev_to_bdev.sh@36 -- # [[ This Is Our Magic, find it == \T\h\i\s\ \I\s\ \O\u\r\ \M\a\g\i\c\,\ \f\i\n\d\ \i\t ]] 00:19:07.045 09:47:30 -- dd/bdev_to_bdev.sh@18 -- # for offset in "${offsets[@]}" 00:19:07.046 09:47:30 -- dd/bdev_to_bdev.sh@20 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --ob=Nvme1n1 --count=65 --seek=64 --bs=1048576 --json /dev/fd/62 00:19:07.046 09:47:30 -- dd/bdev_to_bdev.sh@20 -- # gen_conf 00:19:07.046 09:47:30 -- dd/common.sh@31 -- # xtrace_disable 00:19:07.046 09:47:30 -- common/autotest_common.sh@10 -- # set +x 00:19:07.305 { 00:19:07.305 "subsystems": [ 00:19:07.305 { 00:19:07.305 "subsystem": "bdev", 00:19:07.305 "config": [ 00:19:07.305 { 00:19:07.305 "params": { 00:19:07.305 "trtype": "pcie", 00:19:07.305 "traddr": "0000:00:10.0", 00:19:07.305 "name": "Nvme0" 00:19:07.305 }, 00:19:07.305 "method": "bdev_nvme_attach_controller" 00:19:07.305 }, 00:19:07.305 { 00:19:07.305 "params": { 00:19:07.305 "trtype": "pcie", 00:19:07.305 "traddr": "0000:00:11.0", 00:19:07.305 "name": "Nvme1" 00:19:07.305 }, 00:19:07.305 "method": "bdev_nvme_attach_controller" 00:19:07.305 }, 00:19:07.305 { 00:19:07.305 "method": "bdev_wait_for_examine" 00:19:07.305 } 00:19:07.305 ] 00:19:07.305 } 00:19:07.305 ] 00:19:07.305 } 00:19:07.305 [2024-04-18 09:47:30.512617] Starting SPDK v24.05-pre git sha1 65b4e17c6 / DPDK 23.11.0 initialization... 
00:19:07.305 [2024-04-18 09:47:30.512761] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid66145 ] 00:19:07.305 [2024-04-18 09:47:30.678216] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:07.881 [2024-04-18 09:47:30.959384] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:19:09.515  Copying: 65/65 [MB] (average 1031 MBps) 00:19:09.515 00:19:09.515 09:47:32 -- dd/bdev_to_bdev.sh@28 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme1n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --count=1 --skip=64 --bs=1048576 --json /dev/fd/62 00:19:09.515 09:47:32 -- dd/bdev_to_bdev.sh@28 -- # gen_conf 00:19:09.515 09:47:32 -- dd/common.sh@31 -- # xtrace_disable 00:19:09.515 09:47:32 -- common/autotest_common.sh@10 -- # set +x 00:19:09.515 { 00:19:09.515 "subsystems": [ 00:19:09.515 { 00:19:09.515 "subsystem": "bdev", 00:19:09.515 "config": [ 00:19:09.515 { 00:19:09.515 "params": { 00:19:09.515 "trtype": "pcie", 00:19:09.515 "traddr": "0000:00:10.0", 00:19:09.515 "name": "Nvme0" 00:19:09.515 }, 00:19:09.515 "method": "bdev_nvme_attach_controller" 00:19:09.515 }, 00:19:09.515 { 00:19:09.515 "params": { 00:19:09.515 "trtype": "pcie", 00:19:09.515 "traddr": "0000:00:11.0", 00:19:09.515 "name": "Nvme1" 00:19:09.515 }, 00:19:09.515 "method": "bdev_nvme_attach_controller" 00:19:09.515 }, 00:19:09.515 { 00:19:09.515 "method": "bdev_wait_for_examine" 00:19:09.515 } 00:19:09.515 ] 00:19:09.515 } 00:19:09.515 ] 00:19:09.515 } 00:19:09.515 [2024-04-18 09:47:32.715078] Starting SPDK v24.05-pre git sha1 65b4e17c6 / DPDK 23.11.0 initialization... 
00:19:09.515 [2024-04-18 09:47:32.715287] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid66183 ] 00:19:09.515 [2024-04-18 09:47:32.888808] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:09.774 [2024-04-18 09:47:33.126757] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:19:11.716  Copying: 1024/1024 [kB] (average 1000 MBps) 00:19:11.716 00:19:11.717 09:47:34 -- dd/bdev_to_bdev.sh@35 -- # read -rn26 magic_check 00:19:11.717 ************************************ 00:19:11.717 END TEST dd_offset_magic 00:19:11.717 ************************************ 00:19:11.717 09:47:34 -- dd/bdev_to_bdev.sh@36 -- # [[ This Is Our Magic, find it == \T\h\i\s\ \I\s\ \O\u\r\ \M\a\g\i\c\,\ \f\i\n\d\ \i\t ]] 00:19:11.717 00:19:11.717 real 0m8.558s 00:19:11.717 user 0m7.193s 00:19:11.717 sys 0m2.654s 00:19:11.717 09:47:34 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:19:11.717 09:47:34 -- common/autotest_common.sh@10 -- # set +x 00:19:11.717 09:47:34 -- dd/bdev_to_bdev.sh@1 -- # cleanup 00:19:11.717 09:47:34 -- dd/bdev_to_bdev.sh@42 -- # clear_nvme Nvme0n1 '' 4194330 00:19:11.717 09:47:34 -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:19:11.717 09:47:34 -- dd/common.sh@11 -- # local nvme_ref= 00:19:11.717 09:47:34 -- dd/common.sh@12 -- # local size=4194330 00:19:11.717 09:47:34 -- dd/common.sh@14 -- # local bs=1048576 00:19:11.717 09:47:34 -- dd/common.sh@15 -- # local count=5 00:19:11.717 09:47:34 -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=5 --json /dev/fd/62 00:19:11.717 09:47:34 -- dd/common.sh@18 -- # gen_conf 00:19:11.717 09:47:34 -- dd/common.sh@31 -- # xtrace_disable 00:19:11.717 09:47:34 -- common/autotest_common.sh@10 -- # set +x 00:19:11.717 { 00:19:11.717 "subsystems": [ 00:19:11.717 { 00:19:11.717 "subsystem": "bdev", 00:19:11.717 "config": [ 00:19:11.717 { 00:19:11.717 "params": { 00:19:11.717 "trtype": "pcie", 00:19:11.717 "traddr": "0000:00:10.0", 00:19:11.717 "name": "Nvme0" 00:19:11.717 }, 00:19:11.717 "method": "bdev_nvme_attach_controller" 00:19:11.717 }, 00:19:11.717 { 00:19:11.717 "params": { 00:19:11.717 "trtype": "pcie", 00:19:11.717 "traddr": "0000:00:11.0", 00:19:11.717 "name": "Nvme1" 00:19:11.717 }, 00:19:11.717 "method": "bdev_nvme_attach_controller" 00:19:11.717 }, 00:19:11.717 { 00:19:11.717 "method": "bdev_wait_for_examine" 00:19:11.717 } 00:19:11.717 ] 00:19:11.717 } 00:19:11.717 ] 00:19:11.717 } 00:19:11.717 [2024-04-18 09:47:34.886023] Starting SPDK v24.05-pre git sha1 65b4e17c6 / DPDK 23.11.0 initialization... 
00:19:11.717 [2024-04-18 09:47:34.886497] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid66232 ] 00:19:11.717 [2024-04-18 09:47:35.062326] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:11.975 [2024-04-18 09:47:35.306080] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:19:13.474  Copying: 5120/5120 [kB] (average 1250 MBps) 00:19:13.474 00:19:13.474 09:47:36 -- dd/bdev_to_bdev.sh@43 -- # clear_nvme Nvme1n1 '' 4194330 00:19:13.474 09:47:36 -- dd/common.sh@10 -- # local bdev=Nvme1n1 00:19:13.474 09:47:36 -- dd/common.sh@11 -- # local nvme_ref= 00:19:13.474 09:47:36 -- dd/common.sh@12 -- # local size=4194330 00:19:13.474 09:47:36 -- dd/common.sh@14 -- # local bs=1048576 00:19:13.474 09:47:36 -- dd/common.sh@15 -- # local count=5 00:19:13.474 09:47:36 -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme1n1 --count=5 --json /dev/fd/62 00:19:13.474 09:47:36 -- dd/common.sh@18 -- # gen_conf 00:19:13.474 09:47:36 -- dd/common.sh@31 -- # xtrace_disable 00:19:13.474 09:47:36 -- common/autotest_common.sh@10 -- # set +x 00:19:13.735 { 00:19:13.735 "subsystems": [ 00:19:13.735 { 00:19:13.735 "subsystem": "bdev", 00:19:13.735 "config": [ 00:19:13.735 { 00:19:13.735 "params": { 00:19:13.735 "trtype": "pcie", 00:19:13.735 "traddr": "0000:00:10.0", 00:19:13.735 "name": "Nvme0" 00:19:13.735 }, 00:19:13.735 "method": "bdev_nvme_attach_controller" 00:19:13.735 }, 00:19:13.735 { 00:19:13.735 "params": { 00:19:13.735 "trtype": "pcie", 00:19:13.735 "traddr": "0000:00:11.0", 00:19:13.735 "name": "Nvme1" 00:19:13.735 }, 00:19:13.735 "method": "bdev_nvme_attach_controller" 00:19:13.735 }, 00:19:13.735 { 00:19:13.735 "method": "bdev_wait_for_examine" 00:19:13.735 } 00:19:13.735 ] 00:19:13.735 } 00:19:13.735 ] 00:19:13.735 } 00:19:13.735 [2024-04-18 09:47:36.969191] Starting SPDK v24.05-pre git sha1 65b4e17c6 / DPDK 23.11.0 initialization... 
00:19:13.735 [2024-04-18 09:47:36.969423] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid66265 ] 00:19:13.998 [2024-04-18 09:47:37.154869] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:14.261 [2024-04-18 09:47:37.455187] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:19:16.229  Copying: 5120/5120 [kB] (average 714 MBps) 00:19:16.229 00:19:16.229 09:47:39 -- dd/bdev_to_bdev.sh@44 -- # rm -f /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 '' 00:19:16.229 ************************************ 00:19:16.229 END TEST spdk_dd_bdev_to_bdev 00:19:16.229 ************************************ 00:19:16.229 00:19:16.229 real 0m18.855s 00:19:16.229 user 0m15.642s 00:19:16.229 sys 0m8.408s 00:19:16.229 09:47:39 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:19:16.229 09:47:39 -- common/autotest_common.sh@10 -- # set +x 00:19:16.229 09:47:39 -- dd/dd.sh@24 -- # (( SPDK_TEST_URING == 1 )) 00:19:16.229 09:47:39 -- dd/dd.sh@25 -- # run_test spdk_dd_uring /home/vagrant/spdk_repo/spdk/test/dd/uring.sh 00:19:16.229 09:47:39 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:19:16.229 09:47:39 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:19:16.229 09:47:39 -- common/autotest_common.sh@10 -- # set +x 00:19:16.229 ************************************ 00:19:16.229 START TEST spdk_dd_uring 00:19:16.229 ************************************ 00:19:16.229 09:47:39 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/dd/uring.sh 00:19:16.229 * Looking for test storage... 
00:19:16.229 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:19:16.229 09:47:39 -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:19:16.229 09:47:39 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:16.229 09:47:39 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:16.229 09:47:39 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:16.229 09:47:39 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:16.229 09:47:39 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:16.229 09:47:39 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:16.229 09:47:39 -- paths/export.sh@5 -- # export PATH 00:19:16.229 09:47:39 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:16.229 09:47:39 -- dd/uring.sh@103 -- # run_test dd_uring_copy uring_zram_copy 00:19:16.229 09:47:39 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:19:16.229 09:47:39 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:19:16.229 09:47:39 -- common/autotest_common.sh@10 -- # set +x 00:19:16.229 ************************************ 00:19:16.229 START TEST dd_uring_copy 00:19:16.229 ************************************ 00:19:16.229 09:47:39 -- common/autotest_common.sh@1111 -- # uring_zram_copy 00:19:16.229 09:47:39 -- dd/uring.sh@15 -- # local zram_dev_id 00:19:16.229 09:47:39 -- dd/uring.sh@16 -- # local magic 00:19:16.229 09:47:39 -- dd/uring.sh@17 -- # local 
magic_file0=/home/vagrant/spdk_repo/spdk/test/dd/magic.dump0 00:19:16.229 09:47:39 -- dd/uring.sh@18 -- # local magic_file1=/home/vagrant/spdk_repo/spdk/test/dd/magic.dump1 00:19:16.229 09:47:39 -- dd/uring.sh@19 -- # local verify_magic 00:19:16.229 09:47:39 -- dd/uring.sh@21 -- # init_zram 00:19:16.229 09:47:39 -- dd/common.sh@163 -- # [[ -e /sys/class/zram-control ]] 00:19:16.229 09:47:39 -- dd/common.sh@164 -- # return 00:19:16.229 09:47:39 -- dd/uring.sh@22 -- # create_zram_dev 00:19:16.229 09:47:39 -- dd/common.sh@168 -- # cat /sys/class/zram-control/hot_add 00:19:16.229 09:47:39 -- dd/uring.sh@22 -- # zram_dev_id=1 00:19:16.229 09:47:39 -- dd/uring.sh@23 -- # set_zram_dev 1 512M 00:19:16.229 09:47:39 -- dd/common.sh@181 -- # local id=1 00:19:16.229 09:47:39 -- dd/common.sh@182 -- # local size=512M 00:19:16.229 09:47:39 -- dd/common.sh@184 -- # [[ -e /sys/block/zram1 ]] 00:19:16.229 09:47:39 -- dd/common.sh@186 -- # echo 512M 00:19:16.229 09:47:39 -- dd/uring.sh@25 -- # local ubdev=uring0 ufile=/dev/zram1 00:19:16.229 09:47:39 -- dd/uring.sh@27 -- # method_bdev_uring_create_0=(['filename']='/dev/zram1' ['name']='uring0') 00:19:16.229 09:47:39 -- dd/uring.sh@27 -- # local -A method_bdev_uring_create_0 00:19:16.229 09:47:39 -- dd/uring.sh@33 -- # local mbdev=malloc0 mbdev_b=1048576 mbdev_bs=512 00:19:16.229 09:47:39 -- dd/uring.sh@35 -- # method_bdev_malloc_create_0=(['name']='malloc0' ['num_blocks']='1048576' ['block_size']='512') 00:19:16.229 09:47:39 -- dd/uring.sh@35 -- # local -A method_bdev_malloc_create_0 00:19:16.229 09:47:39 -- dd/uring.sh@41 -- # gen_bytes 1024 00:19:16.229 09:47:39 -- dd/common.sh@98 -- # xtrace_disable 00:19:16.229 09:47:39 -- common/autotest_common.sh@10 -- # set +x 00:19:16.229 09:47:39 -- dd/uring.sh@41 -- # magic=8txml9zasbbdac4c13b9bxdw5p23f3onbh8huahgyvwttyiysnfr3pdsg53i5nx9prltz6skqd0knaeaayqxd4fk7qpmngsy3u4zt5tgoyiingmutsrewjx9govgoli4301rtiem3jl55jqkojy6nmesuoufvr4dkm2eihaycl9z8jehjpts9ojgq0842z5ifhc62ej95hci2flplghvtqquv4vzwtsta3jcr1tdrvh3uvpquwkb4z76u2mhmlm4l0k84d35pt7qj14iqexzytmxk38h9padw6rltufst4yeszvdo7dkprmv1jv7zjgo0bv3l25kpvjn1eufyx4jy0w5itqlwqo6d53ri5ytuuwfbp3uvhx4sebbmyfvwlwtsbcvpkim360q6n7h5t3zgz5e2a8ckea924j1x2g50cuspfryehbybeoy86hst3unscggr63uzos5pq0t86dn4jqyp9lqodzz1xp68s2m20hagdhnse5tb3nve1lct8ur4mw34cskwgdi6r5585vxpqk4t52awch7ndglxm8y7aa5k8zg4mtii2qd3kaarbdkfytbl5ynukgirdidkw1y16ju6vakq0l2jw1pcgi7oox45z7xihzcu5u1guaj17m5kwm4hhbxb5g2mrw3fpzb7c5cin8hmra8iin6izm2dhm9f3p8qy6w2gkzk8tim3cjioej0ybgft49pl705ynf6x6ne42ql2w4y6vsugeefvqpjgpg1huo419iaidaomz6z59c6pc5v0228239445dlzvh69dwb0t45w9dykdip768r69ov6gqb23bej3r0b2235fehdu4a36o11eylremgb5t46k97lsqhdhqtg04urx9fyc3yhlk0qyhtpay91rv3aydh39b425evj8xlb126swmeodsn28p2hwg3aqs1ltgj7unsb0ht9b282ypxlxiux3nsng9fa06zbw3nb4kspioo3ff9i1k4h2d7npvesejv6hyutht3pyrl5uy1uli 00:19:16.229 09:47:39 -- dd/uring.sh@42 -- # echo 
8txml9zasbbdac4c13b9bxdw5p23f3onbh8huahgyvwttyiysnfr3pdsg53i5nx9prltz6skqd0knaeaayqxd4fk7qpmngsy3u4zt5tgoyiingmutsrewjx9govgoli4301rtiem3jl55jqkojy6nmesuoufvr4dkm2eihaycl9z8jehjpts9ojgq0842z5ifhc62ej95hci2flplghvtqquv4vzwtsta3jcr1tdrvh3uvpquwkb4z76u2mhmlm4l0k84d35pt7qj14iqexzytmxk38h9padw6rltufst4yeszvdo7dkprmv1jv7zjgo0bv3l25kpvjn1eufyx4jy0w5itqlwqo6d53ri5ytuuwfbp3uvhx4sebbmyfvwlwtsbcvpkim360q6n7h5t3zgz5e2a8ckea924j1x2g50cuspfryehbybeoy86hst3unscggr63uzos5pq0t86dn4jqyp9lqodzz1xp68s2m20hagdhnse5tb3nve1lct8ur4mw34cskwgdi6r5585vxpqk4t52awch7ndglxm8y7aa5k8zg4mtii2qd3kaarbdkfytbl5ynukgirdidkw1y16ju6vakq0l2jw1pcgi7oox45z7xihzcu5u1guaj17m5kwm4hhbxb5g2mrw3fpzb7c5cin8hmra8iin6izm2dhm9f3p8qy6w2gkzk8tim3cjioej0ybgft49pl705ynf6x6ne42ql2w4y6vsugeefvqpjgpg1huo419iaidaomz6z59c6pc5v0228239445dlzvh69dwb0t45w9dykdip768r69ov6gqb23bej3r0b2235fehdu4a36o11eylremgb5t46k97lsqhdhqtg04urx9fyc3yhlk0qyhtpay91rv3aydh39b425evj8xlb126swmeodsn28p2hwg3aqs1ltgj7unsb0ht9b282ypxlxiux3nsng9fa06zbw3nb4kspioo3ff9i1k4h2d7npvesejv6hyutht3pyrl5uy1uli 00:19:16.229 09:47:39 -- dd/uring.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --of=/home/vagrant/spdk_repo/spdk/test/dd/magic.dump0 --oflag=append --bs=536869887 --count=1 00:19:16.489 [2024-04-18 09:47:39.639200] Starting SPDK v24.05-pre git sha1 65b4e17c6 / DPDK 23.11.0 initialization... 00:19:16.489 [2024-04-18 09:47:39.639847] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid66356 ] 00:19:16.489 [2024-04-18 09:47:39.817560] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:16.747 [2024-04-18 09:47:40.055702] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:19:20.700  Copying: 511/511 [MB] (average 1458 MBps) 00:19:20.700 00:19:20.700 09:47:43 -- dd/uring.sh@54 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/magic.dump0 --ob=uring0 --json /dev/fd/62 00:19:20.700 09:47:43 -- dd/uring.sh@54 -- # gen_conf 00:19:20.700 09:47:43 -- dd/common.sh@31 -- # xtrace_disable 00:19:20.700 09:47:43 -- common/autotest_common.sh@10 -- # set +x 00:19:20.700 { 00:19:20.700 "subsystems": [ 00:19:20.700 { 00:19:20.700 "subsystem": "bdev", 00:19:20.700 "config": [ 00:19:20.700 { 00:19:20.700 "params": { 00:19:20.700 "block_size": 512, 00:19:20.700 "num_blocks": 1048576, 00:19:20.700 "name": "malloc0" 00:19:20.700 }, 00:19:20.700 "method": "bdev_malloc_create" 00:19:20.700 }, 00:19:20.700 { 00:19:20.700 "params": { 00:19:20.700 "filename": "/dev/zram1", 00:19:20.700 "name": "uring0" 00:19:20.700 }, 00:19:20.700 "method": "bdev_uring_create" 00:19:20.700 }, 00:19:20.700 { 00:19:20.700 "method": "bdev_wait_for_examine" 00:19:20.700 } 00:19:20.700 ] 00:19:20.700 } 00:19:20.700 ] 00:19:20.700 } 00:19:20.700 [2024-04-18 09:47:43.790006] Starting SPDK v24.05-pre git sha1 65b4e17c6 / DPDK 23.11.0 initialization... 
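The uring_zram_copy path above layers a uring bdev on a zram block device: the device id comes from the kernel hot_add interface, it is sized to 512M, and /dev/zram1 is handed to bdev_uring_create. A rough standalone sketch of that setup and the magic.dump0 -> uring0 copy, assuming the standard /sys/block/zramN/disksize attribute (the log only traces "echo 512M") and an illustrative /tmp/uring_copy.json path:

  id=$(cat /sys/class/zram-control/hot_add)      # prints the new device id, 1 in this run
  echo 512M > /sys/block/zram${id}/disksize
  cat > /tmp/uring_copy.json <<'EOF'
  {
    "subsystems": [
      {
        "subsystem": "bdev",
        "config": [
          { "params": { "name": "malloc0", "num_blocks": 1048576, "block_size": 512 },
            "method": "bdev_malloc_create" },
          { "params": { "filename": "/dev/zram1", "name": "uring0" },
            "method": "bdev_uring_create" },
          { "method": "bdev_wait_for_examine" }
        ]
      }
    ]
  }
  EOF
  /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd \
      --if=/home/vagrant/spdk_repo/spdk/test/dd/magic.dump0 --ob=uring0 --json /tmp/uring_copy.json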
00:19:20.700 [2024-04-18 09:47:43.790203] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid66406 ] 00:19:20.700 [2024-04-18 09:47:43.964942] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:21.018 [2024-04-18 09:47:44.258973] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:19:27.326  Copying: 185/512 [MB] (185 MBps) Copying: 372/512 [MB] (187 MBps) Copying: 512/512 [MB] (average 186 MBps) 00:19:27.326 00:19:27.326 09:47:50 -- dd/uring.sh@60 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=uring0 --of=/home/vagrant/spdk_repo/spdk/test/dd/magic.dump1 --json /dev/fd/62 00:19:27.326 09:47:50 -- dd/uring.sh@60 -- # gen_conf 00:19:27.326 09:47:50 -- dd/common.sh@31 -- # xtrace_disable 00:19:27.326 09:47:50 -- common/autotest_common.sh@10 -- # set +x 00:19:27.326 { 00:19:27.326 "subsystems": [ 00:19:27.326 { 00:19:27.326 "subsystem": "bdev", 00:19:27.326 "config": [ 00:19:27.326 { 00:19:27.326 "params": { 00:19:27.326 "block_size": 512, 00:19:27.326 "num_blocks": 1048576, 00:19:27.326 "name": "malloc0" 00:19:27.326 }, 00:19:27.326 "method": "bdev_malloc_create" 00:19:27.326 }, 00:19:27.326 { 00:19:27.326 "params": { 00:19:27.326 "filename": "/dev/zram1", 00:19:27.326 "name": "uring0" 00:19:27.326 }, 00:19:27.326 "method": "bdev_uring_create" 00:19:27.326 }, 00:19:27.326 { 00:19:27.326 "method": "bdev_wait_for_examine" 00:19:27.326 } 00:19:27.326 ] 00:19:27.326 } 00:19:27.326 ] 00:19:27.326 } 00:19:27.326 [2024-04-18 09:47:50.439296] Starting SPDK v24.05-pre git sha1 65b4e17c6 / DPDK 23.11.0 initialization... 00:19:27.326 [2024-04-18 09:47:50.439514] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid66492 ] 00:19:27.326 [2024-04-18 09:47:50.610087] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:27.591 [2024-04-18 09:47:50.873876] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:19:34.831  Copying: 139/512 [MB] (139 MBps) Copying: 286/512 [MB] (146 MBps) Copying: 436/512 [MB] (149 MBps) Copying: 512/512 [MB] (average 142 MBps) 00:19:34.831 00:19:34.831 09:47:57 -- dd/uring.sh@65 -- # read -rn1024 verify_magic 00:19:34.831 09:47:57 -- dd/uring.sh@66 -- # [[ 
8txml9zasbbdac4c13b9bxdw5p23f3onbh8huahgyvwttyiysnfr3pdsg53i5nx9prltz6skqd0knaeaayqxd4fk7qpmngsy3u4zt5tgoyiingmutsrewjx9govgoli4301rtiem3jl55jqkojy6nmesuoufvr4dkm2eihaycl9z8jehjpts9ojgq0842z5ifhc62ej95hci2flplghvtqquv4vzwtsta3jcr1tdrvh3uvpquwkb4z76u2mhmlm4l0k84d35pt7qj14iqexzytmxk38h9padw6rltufst4yeszvdo7dkprmv1jv7zjgo0bv3l25kpvjn1eufyx4jy0w5itqlwqo6d53ri5ytuuwfbp3uvhx4sebbmyfvwlwtsbcvpkim360q6n7h5t3zgz5e2a8ckea924j1x2g50cuspfryehbybeoy86hst3unscggr63uzos5pq0t86dn4jqyp9lqodzz1xp68s2m20hagdhnse5tb3nve1lct8ur4mw34cskwgdi6r5585vxpqk4t52awch7ndglxm8y7aa5k8zg4mtii2qd3kaarbdkfytbl5ynukgirdidkw1y16ju6vakq0l2jw1pcgi7oox45z7xihzcu5u1guaj17m5kwm4hhbxb5g2mrw3fpzb7c5cin8hmra8iin6izm2dhm9f3p8qy6w2gkzk8tim3cjioej0ybgft49pl705ynf6x6ne42ql2w4y6vsugeefvqpjgpg1huo419iaidaomz6z59c6pc5v0228239445dlzvh69dwb0t45w9dykdip768r69ov6gqb23bej3r0b2235fehdu4a36o11eylremgb5t46k97lsqhdhqtg04urx9fyc3yhlk0qyhtpay91rv3aydh39b425evj8xlb126swmeodsn28p2hwg3aqs1ltgj7unsb0ht9b282ypxlxiux3nsng9fa06zbw3nb4kspioo3ff9i1k4h2d7npvesejv6hyutht3pyrl5uy1uli == \8\t\x\m\l\9\z\a\s\b\b\d\a\c\4\c\1\3\b\9\b\x\d\w\5\p\2\3\f\3\o\n\b\h\8\h\u\a\h\g\y\v\w\t\t\y\i\y\s\n\f\r\3\p\d\s\g\5\3\i\5\n\x\9\p\r\l\t\z\6\s\k\q\d\0\k\n\a\e\a\a\y\q\x\d\4\f\k\7\q\p\m\n\g\s\y\3\u\4\z\t\5\t\g\o\y\i\i\n\g\m\u\t\s\r\e\w\j\x\9\g\o\v\g\o\l\i\4\3\0\1\r\t\i\e\m\3\j\l\5\5\j\q\k\o\j\y\6\n\m\e\s\u\o\u\f\v\r\4\d\k\m\2\e\i\h\a\y\c\l\9\z\8\j\e\h\j\p\t\s\9\o\j\g\q\0\8\4\2\z\5\i\f\h\c\6\2\e\j\9\5\h\c\i\2\f\l\p\l\g\h\v\t\q\q\u\v\4\v\z\w\t\s\t\a\3\j\c\r\1\t\d\r\v\h\3\u\v\p\q\u\w\k\b\4\z\7\6\u\2\m\h\m\l\m\4\l\0\k\8\4\d\3\5\p\t\7\q\j\1\4\i\q\e\x\z\y\t\m\x\k\3\8\h\9\p\a\d\w\6\r\l\t\u\f\s\t\4\y\e\s\z\v\d\o\7\d\k\p\r\m\v\1\j\v\7\z\j\g\o\0\b\v\3\l\2\5\k\p\v\j\n\1\e\u\f\y\x\4\j\y\0\w\5\i\t\q\l\w\q\o\6\d\5\3\r\i\5\y\t\u\u\w\f\b\p\3\u\v\h\x\4\s\e\b\b\m\y\f\v\w\l\w\t\s\b\c\v\p\k\i\m\3\6\0\q\6\n\7\h\5\t\3\z\g\z\5\e\2\a\8\c\k\e\a\9\2\4\j\1\x\2\g\5\0\c\u\s\p\f\r\y\e\h\b\y\b\e\o\y\8\6\h\s\t\3\u\n\s\c\g\g\r\6\3\u\z\o\s\5\p\q\0\t\8\6\d\n\4\j\q\y\p\9\l\q\o\d\z\z\1\x\p\6\8\s\2\m\2\0\h\a\g\d\h\n\s\e\5\t\b\3\n\v\e\1\l\c\t\8\u\r\4\m\w\3\4\c\s\k\w\g\d\i\6\r\5\5\8\5\v\x\p\q\k\4\t\5\2\a\w\c\h\7\n\d\g\l\x\m\8\y\7\a\a\5\k\8\z\g\4\m\t\i\i\2\q\d\3\k\a\a\r\b\d\k\f\y\t\b\l\5\y\n\u\k\g\i\r\d\i\d\k\w\1\y\1\6\j\u\6\v\a\k\q\0\l\2\j\w\1\p\c\g\i\7\o\o\x\4\5\z\7\x\i\h\z\c\u\5\u\1\g\u\a\j\1\7\m\5\k\w\m\4\h\h\b\x\b\5\g\2\m\r\w\3\f\p\z\b\7\c\5\c\i\n\8\h\m\r\a\8\i\i\n\6\i\z\m\2\d\h\m\9\f\3\p\8\q\y\6\w\2\g\k\z\k\8\t\i\m\3\c\j\i\o\e\j\0\y\b\g\f\t\4\9\p\l\7\0\5\y\n\f\6\x\6\n\e\4\2\q\l\2\w\4\y\6\v\s\u\g\e\e\f\v\q\p\j\g\p\g\1\h\u\o\4\1\9\i\a\i\d\a\o\m\z\6\z\5\9\c\6\p\c\5\v\0\2\2\8\2\3\9\4\4\5\d\l\z\v\h\6\9\d\w\b\0\t\4\5\w\9\d\y\k\d\i\p\7\6\8\r\6\9\o\v\6\g\q\b\2\3\b\e\j\3\r\0\b\2\2\3\5\f\e\h\d\u\4\a\3\6\o\1\1\e\y\l\r\e\m\g\b\5\t\4\6\k\9\7\l\s\q\h\d\h\q\t\g\0\4\u\r\x\9\f\y\c\3\y\h\l\k\0\q\y\h\t\p\a\y\9\1\r\v\3\a\y\d\h\3\9\b\4\2\5\e\v\j\8\x\l\b\1\2\6\s\w\m\e\o\d\s\n\2\8\p\2\h\w\g\3\a\q\s\1\l\t\g\j\7\u\n\s\b\0\h\t\9\b\2\8\2\y\p\x\l\x\i\u\x\3\n\s\n\g\9\f\a\0\6\z\b\w\3\n\b\4\k\s\p\i\o\o\3\f\f\9\i\1\k\4\h\2\d\7\n\p\v\e\s\e\j\v\6\h\y\u\t\h\t\3\p\y\r\l\5\u\y\1\u\l\i ]] 00:19:34.831 09:47:57 -- dd/uring.sh@68 -- # read -rn1024 verify_magic 00:19:34.831 09:47:57 -- dd/uring.sh@69 -- # [[ 
8txml9zasbbdac4c13b9bxdw5p23f3onbh8huahgyvwttyiysnfr3pdsg53i5nx9prltz6skqd0knaeaayqxd4fk7qpmngsy3u4zt5tgoyiingmutsrewjx9govgoli4301rtiem3jl55jqkojy6nmesuoufvr4dkm2eihaycl9z8jehjpts9ojgq0842z5ifhc62ej95hci2flplghvtqquv4vzwtsta3jcr1tdrvh3uvpquwkb4z76u2mhmlm4l0k84d35pt7qj14iqexzytmxk38h9padw6rltufst4yeszvdo7dkprmv1jv7zjgo0bv3l25kpvjn1eufyx4jy0w5itqlwqo6d53ri5ytuuwfbp3uvhx4sebbmyfvwlwtsbcvpkim360q6n7h5t3zgz5e2a8ckea924j1x2g50cuspfryehbybeoy86hst3unscggr63uzos5pq0t86dn4jqyp9lqodzz1xp68s2m20hagdhnse5tb3nve1lct8ur4mw34cskwgdi6r5585vxpqk4t52awch7ndglxm8y7aa5k8zg4mtii2qd3kaarbdkfytbl5ynukgirdidkw1y16ju6vakq0l2jw1pcgi7oox45z7xihzcu5u1guaj17m5kwm4hhbxb5g2mrw3fpzb7c5cin8hmra8iin6izm2dhm9f3p8qy6w2gkzk8tim3cjioej0ybgft49pl705ynf6x6ne42ql2w4y6vsugeefvqpjgpg1huo419iaidaomz6z59c6pc5v0228239445dlzvh69dwb0t45w9dykdip768r69ov6gqb23bej3r0b2235fehdu4a36o11eylremgb5t46k97lsqhdhqtg04urx9fyc3yhlk0qyhtpay91rv3aydh39b425evj8xlb126swmeodsn28p2hwg3aqs1ltgj7unsb0ht9b282ypxlxiux3nsng9fa06zbw3nb4kspioo3ff9i1k4h2d7npvesejv6hyutht3pyrl5uy1uli == \8\t\x\m\l\9\z\a\s\b\b\d\a\c\4\c\1\3\b\9\b\x\d\w\5\p\2\3\f\3\o\n\b\h\8\h\u\a\h\g\y\v\w\t\t\y\i\y\s\n\f\r\3\p\d\s\g\5\3\i\5\n\x\9\p\r\l\t\z\6\s\k\q\d\0\k\n\a\e\a\a\y\q\x\d\4\f\k\7\q\p\m\n\g\s\y\3\u\4\z\t\5\t\g\o\y\i\i\n\g\m\u\t\s\r\e\w\j\x\9\g\o\v\g\o\l\i\4\3\0\1\r\t\i\e\m\3\j\l\5\5\j\q\k\o\j\y\6\n\m\e\s\u\o\u\f\v\r\4\d\k\m\2\e\i\h\a\y\c\l\9\z\8\j\e\h\j\p\t\s\9\o\j\g\q\0\8\4\2\z\5\i\f\h\c\6\2\e\j\9\5\h\c\i\2\f\l\p\l\g\h\v\t\q\q\u\v\4\v\z\w\t\s\t\a\3\j\c\r\1\t\d\r\v\h\3\u\v\p\q\u\w\k\b\4\z\7\6\u\2\m\h\m\l\m\4\l\0\k\8\4\d\3\5\p\t\7\q\j\1\4\i\q\e\x\z\y\t\m\x\k\3\8\h\9\p\a\d\w\6\r\l\t\u\f\s\t\4\y\e\s\z\v\d\o\7\d\k\p\r\m\v\1\j\v\7\z\j\g\o\0\b\v\3\l\2\5\k\p\v\j\n\1\e\u\f\y\x\4\j\y\0\w\5\i\t\q\l\w\q\o\6\d\5\3\r\i\5\y\t\u\u\w\f\b\p\3\u\v\h\x\4\s\e\b\b\m\y\f\v\w\l\w\t\s\b\c\v\p\k\i\m\3\6\0\q\6\n\7\h\5\t\3\z\g\z\5\e\2\a\8\c\k\e\a\9\2\4\j\1\x\2\g\5\0\c\u\s\p\f\r\y\e\h\b\y\b\e\o\y\8\6\h\s\t\3\u\n\s\c\g\g\r\6\3\u\z\o\s\5\p\q\0\t\8\6\d\n\4\j\q\y\p\9\l\q\o\d\z\z\1\x\p\6\8\s\2\m\2\0\h\a\g\d\h\n\s\e\5\t\b\3\n\v\e\1\l\c\t\8\u\r\4\m\w\3\4\c\s\k\w\g\d\i\6\r\5\5\8\5\v\x\p\q\k\4\t\5\2\a\w\c\h\7\n\d\g\l\x\m\8\y\7\a\a\5\k\8\z\g\4\m\t\i\i\2\q\d\3\k\a\a\r\b\d\k\f\y\t\b\l\5\y\n\u\k\g\i\r\d\i\d\k\w\1\y\1\6\j\u\6\v\a\k\q\0\l\2\j\w\1\p\c\g\i\7\o\o\x\4\5\z\7\x\i\h\z\c\u\5\u\1\g\u\a\j\1\7\m\5\k\w\m\4\h\h\b\x\b\5\g\2\m\r\w\3\f\p\z\b\7\c\5\c\i\n\8\h\m\r\a\8\i\i\n\6\i\z\m\2\d\h\m\9\f\3\p\8\q\y\6\w\2\g\k\z\k\8\t\i\m\3\c\j\i\o\e\j\0\y\b\g\f\t\4\9\p\l\7\0\5\y\n\f\6\x\6\n\e\4\2\q\l\2\w\4\y\6\v\s\u\g\e\e\f\v\q\p\j\g\p\g\1\h\u\o\4\1\9\i\a\i\d\a\o\m\z\6\z\5\9\c\6\p\c\5\v\0\2\2\8\2\3\9\4\4\5\d\l\z\v\h\6\9\d\w\b\0\t\4\5\w\9\d\y\k\d\i\p\7\6\8\r\6\9\o\v\6\g\q\b\2\3\b\e\j\3\r\0\b\2\2\3\5\f\e\h\d\u\4\a\3\6\o\1\1\e\y\l\r\e\m\g\b\5\t\4\6\k\9\7\l\s\q\h\d\h\q\t\g\0\4\u\r\x\9\f\y\c\3\y\h\l\k\0\q\y\h\t\p\a\y\9\1\r\v\3\a\y\d\h\3\9\b\4\2\5\e\v\j\8\x\l\b\1\2\6\s\w\m\e\o\d\s\n\2\8\p\2\h\w\g\3\a\q\s\1\l\t\g\j\7\u\n\s\b\0\h\t\9\b\2\8\2\y\p\x\l\x\i\u\x\3\n\s\n\g\9\f\a\0\6\z\b\w\3\n\b\4\k\s\p\i\o\o\3\f\f\9\i\1\k\4\h\2\d\7\n\p\v\e\s\e\j\v\6\h\y\u\t\h\t\3\p\y\r\l\5\u\y\1\u\l\i ]] 00:19:34.831 09:47:57 -- dd/uring.sh@71 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/magic.dump0 /home/vagrant/spdk_repo/spdk/test/dd/magic.dump1 00:19:34.831 09:47:58 -- dd/uring.sh@75 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=uring0 --ob=malloc0 --json /dev/fd/62 00:19:34.831 09:47:58 -- dd/uring.sh@75 -- # gen_conf 00:19:34.831 09:47:58 -- dd/common.sh@31 -- # xtrace_disable 00:19:34.831 09:47:58 -- common/autotest_common.sh@10 -- # set +x 
00:19:34.831 { 00:19:34.831 "subsystems": [ 00:19:34.831 { 00:19:34.831 "subsystem": "bdev", 00:19:34.831 "config": [ 00:19:34.831 { 00:19:34.831 "params": { 00:19:34.831 "block_size": 512, 00:19:34.831 "num_blocks": 1048576, 00:19:34.831 "name": "malloc0" 00:19:34.831 }, 00:19:34.831 "method": "bdev_malloc_create" 00:19:34.831 }, 00:19:34.831 { 00:19:34.831 "params": { 00:19:34.831 "filename": "/dev/zram1", 00:19:34.831 "name": "uring0" 00:19:34.831 }, 00:19:34.831 "method": "bdev_uring_create" 00:19:34.831 }, 00:19:34.831 { 00:19:34.831 "method": "bdev_wait_for_examine" 00:19:34.831 } 00:19:34.831 ] 00:19:34.831 } 00:19:34.831 ] 00:19:34.831 } 00:19:34.831 [2024-04-18 09:47:58.192288] Starting SPDK v24.05-pre git sha1 65b4e17c6 / DPDK 23.11.0 initialization... 00:19:34.831 [2024-04-18 09:47:58.192622] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid66614 ] 00:19:35.090 [2024-04-18 09:47:58.357589] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:35.348 [2024-04-18 09:47:58.594873] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:19:43.156  Copying: 119/512 [MB] (119 MBps) Copying: 238/512 [MB] (118 MBps) Copying: 358/512 [MB] (120 MBps) Copying: 478/512 [MB] (119 MBps) Copying: 512/512 [MB] (average 119 MBps) 00:19:43.156 00:19:43.156 09:48:06 -- dd/uring.sh@82 -- # method_bdev_uring_delete_0=(['name']='uring0') 00:19:43.156 09:48:06 -- dd/uring.sh@82 -- # local -A method_bdev_uring_delete_0 00:19:43.156 09:48:06 -- dd/uring.sh@87 -- # : 00:19:43.157 09:48:06 -- dd/uring.sh@87 -- # : 00:19:43.157 09:48:06 -- dd/uring.sh@87 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/fd/62 --of=/dev/fd/61 --json /dev/fd/59 00:19:43.157 09:48:06 -- dd/uring.sh@87 -- # gen_conf 00:19:43.157 09:48:06 -- dd/common.sh@31 -- # xtrace_disable 00:19:43.157 09:48:06 -- common/autotest_common.sh@10 -- # set +x 00:19:43.157 { 00:19:43.157 "subsystems": [ 00:19:43.157 { 00:19:43.157 "subsystem": "bdev", 00:19:43.157 "config": [ 00:19:43.157 { 00:19:43.157 "params": { 00:19:43.157 "block_size": 512, 00:19:43.157 "num_blocks": 1048576, 00:19:43.157 "name": "malloc0" 00:19:43.157 }, 00:19:43.157 "method": "bdev_malloc_create" 00:19:43.157 }, 00:19:43.157 { 00:19:43.157 "params": { 00:19:43.157 "filename": "/dev/zram1", 00:19:43.157 "name": "uring0" 00:19:43.157 }, 00:19:43.157 "method": "bdev_uring_create" 00:19:43.157 }, 00:19:43.157 { 00:19:43.157 "params": { 00:19:43.157 "name": "uring0" 00:19:43.157 }, 00:19:43.157 "method": "bdev_uring_delete" 00:19:43.157 }, 00:19:43.157 { 00:19:43.157 "method": "bdev_wait_for_examine" 00:19:43.157 } 00:19:43.157 ] 00:19:43.157 } 00:19:43.157 ] 00:19:43.157 } 00:19:43.157 [2024-04-18 09:48:06.285288] Starting SPDK v24.05-pre git sha1 65b4e17c6 / DPDK 23.11.0 initialization... 
00:19:43.157 [2024-04-18 09:48:06.285462] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid66712 ] 00:19:43.157 [2024-04-18 09:48:06.450208] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:43.414 [2024-04-18 09:48:06.696891] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:19:46.895  Copying: 0/0 [B] (average 0 Bps) 00:19:46.895 00:19:46.895 09:48:09 -- dd/uring.sh@94 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=uring0 --of=/dev/fd/62 --json /dev/fd/61 00:19:46.895 09:48:09 -- dd/uring.sh@94 -- # : 00:19:46.895 09:48:09 -- dd/uring.sh@94 -- # gen_conf 00:19:46.895 09:48:09 -- common/autotest_common.sh@638 -- # local es=0 00:19:46.895 09:48:09 -- dd/common.sh@31 -- # xtrace_disable 00:19:46.895 09:48:09 -- common/autotest_common.sh@640 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=uring0 --of=/dev/fd/62 --json /dev/fd/61 00:19:46.895 09:48:09 -- common/autotest_common.sh@10 -- # set +x 00:19:46.895 09:48:09 -- common/autotest_common.sh@626 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:19:46.895 09:48:09 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:19:46.895 09:48:09 -- common/autotest_common.sh@630 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:19:46.895 09:48:09 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:19:46.895 09:48:09 -- common/autotest_common.sh@632 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:19:46.895 09:48:09 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:19:46.895 09:48:09 -- common/autotest_common.sh@632 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:19:46.895 09:48:09 -- common/autotest_common.sh@632 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:19:46.895 09:48:09 -- common/autotest_common.sh@641 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=uring0 --of=/dev/fd/62 --json /dev/fd/61 00:19:46.895 { 00:19:46.895 "subsystems": [ 00:19:46.895 { 00:19:46.895 "subsystem": "bdev", 00:19:46.895 "config": [ 00:19:46.895 { 00:19:46.895 "params": { 00:19:46.895 "block_size": 512, 00:19:46.895 "num_blocks": 1048576, 00:19:46.895 "name": "malloc0" 00:19:46.895 }, 00:19:46.895 "method": "bdev_malloc_create" 00:19:46.895 }, 00:19:46.895 { 00:19:46.895 "params": { 00:19:46.895 "filename": "/dev/zram1", 00:19:46.895 "name": "uring0" 00:19:46.895 }, 00:19:46.895 "method": "bdev_uring_create" 00:19:46.895 }, 00:19:46.895 { 00:19:46.895 "params": { 00:19:46.895 "name": "uring0" 00:19:46.895 }, 00:19:46.895 "method": "bdev_uring_delete" 00:19:46.895 }, 00:19:46.895 { 00:19:46.895 "method": "bdev_wait_for_examine" 00:19:46.895 } 00:19:46.895 ] 00:19:46.895 } 00:19:46.895 ] 00:19:46.895 } 00:19:46.895 [2024-04-18 09:48:10.098381] Starting SPDK v24.05-pre git sha1 65b4e17c6 / DPDK 23.11.0 initialization... 
00:19:46.895 [2024-04-18 09:48:10.098574] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid66769 ] 00:19:46.895 [2024-04-18 09:48:10.275270] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:47.155 [2024-04-18 09:48:10.524649] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:19:48.097 [2024-04-18 09:48:11.397094] bdev.c:8067:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: uring0 00:19:48.097 [2024-04-18 09:48:11.397205] spdk_dd.c: 933:dd_open_bdev: *ERROR*: Could not open bdev uring0: No such device 00:19:48.097 [2024-04-18 09:48:11.397224] spdk_dd.c:1090:dd_run: *ERROR*: uring0: No such device 00:19:48.097 [2024-04-18 09:48:11.397245] app.c: 966:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:19:50.630 [2024-04-18 09:48:13.454586] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:19:50.630 09:48:13 -- common/autotest_common.sh@641 -- # es=237 00:19:50.630 09:48:13 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:19:50.630 09:48:13 -- common/autotest_common.sh@650 -- # es=109 00:19:50.630 09:48:13 -- common/autotest_common.sh@651 -- # case "$es" in 00:19:50.630 09:48:13 -- common/autotest_common.sh@658 -- # es=1 00:19:50.630 09:48:13 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:19:50.630 09:48:13 -- dd/uring.sh@99 -- # remove_zram_dev 1 00:19:50.630 09:48:13 -- dd/common.sh@172 -- # local id=1 00:19:50.630 09:48:13 -- dd/common.sh@174 -- # [[ -e /sys/block/zram1 ]] 00:19:50.630 09:48:13 -- dd/common.sh@176 -- # echo 1 00:19:50.630 09:48:13 -- dd/common.sh@177 -- # echo 1 00:19:50.630 09:48:13 -- dd/uring.sh@100 -- # rm -f /home/vagrant/spdk_repo/spdk/test/dd/magic.dump0 /home/vagrant/spdk_repo/spdk/test/dd/magic.dump1 00:19:50.889 00:19:50.889 real 0m34.606s 00:19:50.889 user 0m28.293s 00:19:50.889 sys 0m16.729s 00:19:50.889 09:48:14 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:19:50.889 09:48:14 -- common/autotest_common.sh@10 -- # set +x 00:19:50.889 ************************************ 00:19:50.889 END TEST dd_uring_copy 00:19:50.889 ************************************ 00:19:50.889 ************************************ 00:19:50.889 END TEST spdk_dd_uring 00:19:50.889 ************************************ 00:19:50.889 00:19:50.889 real 0m34.816s 00:19:50.889 user 0m28.373s 00:19:50.889 sys 0m16.847s 00:19:50.889 09:48:14 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:19:50.889 09:48:14 -- common/autotest_common.sh@10 -- # set +x 00:19:50.889 09:48:14 -- dd/dd.sh@27 -- # run_test spdk_dd_sparse /home/vagrant/spdk_repo/spdk/test/dd/sparse.sh 00:19:50.889 09:48:14 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:19:50.889 09:48:14 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:19:50.889 09:48:14 -- common/autotest_common.sh@10 -- # set +x 00:19:51.152 ************************************ 00:19:51.152 START TEST spdk_dd_sparse 00:19:51.152 ************************************ 00:19:51.152 09:48:14 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/dd/sparse.sh 00:19:51.152 * Looking for test storage... 
00:19:51.152 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:19:51.152 09:48:14 -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:19:51.152 09:48:14 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:51.152 09:48:14 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:51.152 09:48:14 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:51.152 09:48:14 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:51.152 09:48:14 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:51.152 09:48:14 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:51.152 09:48:14 -- paths/export.sh@5 -- # export PATH 00:19:51.152 09:48:14 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:51.152 09:48:14 -- dd/sparse.sh@108 -- # aio_disk=dd_sparse_aio_disk 00:19:51.152 09:48:14 -- dd/sparse.sh@109 -- # aio_bdev=dd_aio 00:19:51.152 09:48:14 -- dd/sparse.sh@110 -- # file1=file_zero1 00:19:51.152 09:48:14 -- dd/sparse.sh@111 -- # file2=file_zero2 00:19:51.152 09:48:14 -- dd/sparse.sh@112 -- # file3=file_zero3 00:19:51.152 09:48:14 -- dd/sparse.sh@113 -- # lvstore=dd_lvstore 00:19:51.152 09:48:14 -- dd/sparse.sh@114 -- # lvol=dd_lvol 00:19:51.152 09:48:14 -- dd/sparse.sh@116 -- # trap cleanup EXIT 00:19:51.152 09:48:14 -- dd/sparse.sh@118 -- # prepare 00:19:51.152 09:48:14 -- dd/sparse.sh@18 -- # truncate dd_sparse_aio_disk --size 104857600 00:19:51.152 09:48:14 -- dd/sparse.sh@20 -- # dd if=/dev/zero of=file_zero1 
bs=4M count=1 00:19:51.152 1+0 records in 00:19:51.152 1+0 records out 00:19:51.152 4194304 bytes (4.2 MB, 4.0 MiB) copied, 0.00795793 s, 527 MB/s 00:19:51.152 09:48:14 -- dd/sparse.sh@21 -- # dd if=/dev/zero of=file_zero1 bs=4M count=1 seek=4 00:19:51.152 1+0 records in 00:19:51.152 1+0 records out 00:19:51.152 4194304 bytes (4.2 MB, 4.0 MiB) copied, 0.00661943 s, 634 MB/s 00:19:51.152 09:48:14 -- dd/sparse.sh@22 -- # dd if=/dev/zero of=file_zero1 bs=4M count=1 seek=8 00:19:51.152 1+0 records in 00:19:51.152 1+0 records out 00:19:51.152 4194304 bytes (4.2 MB, 4.0 MiB) copied, 0.00681942 s, 615 MB/s 00:19:51.152 09:48:14 -- dd/sparse.sh@120 -- # run_test dd_sparse_file_to_file file_to_file 00:19:51.152 09:48:14 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:19:51.152 09:48:14 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:19:51.152 09:48:14 -- common/autotest_common.sh@10 -- # set +x 00:19:51.152 ************************************ 00:19:51.152 START TEST dd_sparse_file_to_file 00:19:51.152 ************************************ 00:19:51.152 09:48:14 -- common/autotest_common.sh@1111 -- # file_to_file 00:19:51.152 09:48:14 -- dd/sparse.sh@26 -- # local stat1_s stat1_b 00:19:51.152 09:48:14 -- dd/sparse.sh@27 -- # local stat2_s stat2_b 00:19:51.152 09:48:14 -- dd/sparse.sh@29 -- # method_bdev_aio_create_0=(['filename']='dd_sparse_aio_disk' ['name']='dd_aio' ['block_size']='4096') 00:19:51.152 09:48:14 -- dd/sparse.sh@29 -- # local -A method_bdev_aio_create_0 00:19:51.152 09:48:14 -- dd/sparse.sh@35 -- # method_bdev_lvol_create_lvstore_1=(['bdev_name']='dd_aio' ['lvs_name']='dd_lvstore') 00:19:51.152 09:48:14 -- dd/sparse.sh@35 -- # local -A method_bdev_lvol_create_lvstore_1 00:19:51.152 09:48:14 -- dd/sparse.sh@41 -- # gen_conf 00:19:51.152 09:48:14 -- dd/sparse.sh@41 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=file_zero1 --of=file_zero2 --bs=12582912 --sparse --json /dev/fd/62 00:19:51.152 09:48:14 -- dd/common.sh@31 -- # xtrace_disable 00:19:51.152 09:48:14 -- common/autotest_common.sh@10 -- # set +x 00:19:51.413 { 00:19:51.413 "subsystems": [ 00:19:51.413 { 00:19:51.413 "subsystem": "bdev", 00:19:51.413 "config": [ 00:19:51.413 { 00:19:51.413 "params": { 00:19:51.413 "block_size": 4096, 00:19:51.413 "filename": "dd_sparse_aio_disk", 00:19:51.413 "name": "dd_aio" 00:19:51.413 }, 00:19:51.413 "method": "bdev_aio_create" 00:19:51.413 }, 00:19:51.413 { 00:19:51.413 "params": { 00:19:51.413 "lvs_name": "dd_lvstore", 00:19:51.413 "bdev_name": "dd_aio" 00:19:51.413 }, 00:19:51.413 "method": "bdev_lvol_create_lvstore" 00:19:51.413 }, 00:19:51.413 { 00:19:51.413 "method": "bdev_wait_for_examine" 00:19:51.413 } 00:19:51.413 ] 00:19:51.413 } 00:19:51.413 ] 00:19:51.413 } 00:19:51.413 [2024-04-18 09:48:14.590281] Starting SPDK v24.05-pre git sha1 65b4e17c6 / DPDK 23.11.0 initialization... 
00:19:51.413 [2024-04-18 09:48:14.590479] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid66905 ] 00:19:51.413 [2024-04-18 09:48:14.765852] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:51.672 [2024-04-18 09:48:15.059494] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:19:53.632  Copying: 12/36 [MB] (average 1000 MBps) 00:19:53.632 00:19:53.632 09:48:16 -- dd/sparse.sh@47 -- # stat --printf=%s file_zero1 00:19:53.632 09:48:16 -- dd/sparse.sh@47 -- # stat1_s=37748736 00:19:53.632 09:48:16 -- dd/sparse.sh@48 -- # stat --printf=%s file_zero2 00:19:53.632 09:48:16 -- dd/sparse.sh@48 -- # stat2_s=37748736 00:19:53.632 09:48:16 -- dd/sparse.sh@50 -- # [[ 37748736 == \3\7\7\4\8\7\3\6 ]] 00:19:53.632 09:48:16 -- dd/sparse.sh@52 -- # stat --printf=%b file_zero1 00:19:53.632 09:48:16 -- dd/sparse.sh@52 -- # stat1_b=24576 00:19:53.632 09:48:16 -- dd/sparse.sh@53 -- # stat --printf=%b file_zero2 00:19:53.632 09:48:16 -- dd/sparse.sh@53 -- # stat2_b=24576 00:19:53.632 09:48:16 -- dd/sparse.sh@55 -- # [[ 24576 == \2\4\5\7\6 ]] 00:19:53.632 00:19:53.632 real 0m2.251s 00:19:53.632 user 0m1.828s 00:19:53.632 sys 0m1.115s 00:19:53.632 ************************************ 00:19:53.632 END TEST dd_sparse_file_to_file 00:19:53.632 ************************************ 00:19:53.632 09:48:16 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:19:53.632 09:48:16 -- common/autotest_common.sh@10 -- # set +x 00:19:53.632 09:48:16 -- dd/sparse.sh@121 -- # run_test dd_sparse_file_to_bdev file_to_bdev 00:19:53.632 09:48:16 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:19:53.632 09:48:16 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:19:53.632 09:48:16 -- common/autotest_common.sh@10 -- # set +x 00:19:53.632 ************************************ 00:19:53.632 START TEST dd_sparse_file_to_bdev 00:19:53.632 ************************************ 00:19:53.632 09:48:16 -- common/autotest_common.sh@1111 -- # file_to_bdev 00:19:53.632 09:48:16 -- dd/sparse.sh@59 -- # method_bdev_aio_create_0=(['filename']='dd_sparse_aio_disk' ['name']='dd_aio' ['block_size']='4096') 00:19:53.632 09:48:16 -- dd/sparse.sh@59 -- # local -A method_bdev_aio_create_0 00:19:53.632 09:48:16 -- dd/sparse.sh@65 -- # method_bdev_lvol_create_1=(['lvs_name']='dd_lvstore' ['lvol_name']='dd_lvol' ['size']='37748736' ['thin_provision']='true') 00:19:53.632 09:48:16 -- dd/sparse.sh@65 -- # local -A method_bdev_lvol_create_1 00:19:53.632 09:48:16 -- dd/sparse.sh@73 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=file_zero2 --ob=dd_lvstore/dd_lvol --bs=12582912 --sparse --json /dev/fd/62 00:19:53.632 09:48:16 -- dd/sparse.sh@73 -- # gen_conf 00:19:53.632 09:48:16 -- dd/common.sh@31 -- # xtrace_disable 00:19:53.632 09:48:16 -- common/autotest_common.sh@10 -- # set +x 00:19:53.632 { 00:19:53.632 "subsystems": [ 00:19:53.632 { 00:19:53.632 "subsystem": "bdev", 00:19:53.632 "config": [ 00:19:53.632 { 00:19:53.632 "params": { 00:19:53.632 "block_size": 4096, 00:19:53.632 "filename": "dd_sparse_aio_disk", 00:19:53.632 "name": "dd_aio" 00:19:53.632 }, 00:19:53.632 "method": "bdev_aio_create" 00:19:53.632 }, 00:19:53.632 { 00:19:53.632 "params": { 00:19:53.632 "lvs_name": "dd_lvstore", 00:19:53.632 "lvol_name": "dd_lvol", 00:19:53.632 "size": 37748736, 00:19:53.632 "thin_provision": true 00:19:53.632 }, 
00:19:53.632 "method": "bdev_lvol_create" 00:19:53.632 }, 00:19:53.632 { 00:19:53.632 "method": "bdev_wait_for_examine" 00:19:53.632 } 00:19:53.632 ] 00:19:53.632 } 00:19:53.632 ] 00:19:53.632 } 00:19:53.632 [2024-04-18 09:48:16.977215] Starting SPDK v24.05-pre git sha1 65b4e17c6 / DPDK 23.11.0 initialization... 00:19:53.632 [2024-04-18 09:48:16.977385] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid66969 ] 00:19:53.890 [2024-04-18 09:48:17.150887] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:54.149 [2024-04-18 09:48:17.442266] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:19:54.407 [2024-04-18 09:48:17.777329] vbdev_lvol_rpc.c: 347:rpc_bdev_lvol_create: *WARNING*: vbdev_lvol_rpc_req_size: deprecated feature rpc_bdev_lvol_create/resize req.size to be removed in v23.09 00:19:54.666  Copying: 12/36 [MB] (average 500 MBps)[2024-04-18 09:48:17.839510] app.c: 937:log_deprecation_hits: *WARNING*: vbdev_lvol_rpc_req_size: deprecation 'rpc_bdev_lvol_create/resize req.size' scheduled for removal in v23.09 hit 1 times 00:19:56.044 00:19:56.044 00:19:56.044 ************************************ 00:19:56.044 END TEST dd_sparse_file_to_bdev 00:19:56.044 ************************************ 00:19:56.044 00:19:56.044 real 0m2.214s 00:19:56.044 user 0m1.844s 00:19:56.044 sys 0m1.104s 00:19:56.044 09:48:19 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:19:56.044 09:48:19 -- common/autotest_common.sh@10 -- # set +x 00:19:56.044 09:48:19 -- dd/sparse.sh@122 -- # run_test dd_sparse_bdev_to_file bdev_to_file 00:19:56.044 09:48:19 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:19:56.044 09:48:19 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:19:56.044 09:48:19 -- common/autotest_common.sh@10 -- # set +x 00:19:56.044 ************************************ 00:19:56.044 START TEST dd_sparse_bdev_to_file 00:19:56.044 ************************************ 00:19:56.044 09:48:19 -- common/autotest_common.sh@1111 -- # bdev_to_file 00:19:56.044 09:48:19 -- dd/sparse.sh@81 -- # local stat2_s stat2_b 00:19:56.044 09:48:19 -- dd/sparse.sh@82 -- # local stat3_s stat3_b 00:19:56.044 09:48:19 -- dd/sparse.sh@84 -- # method_bdev_aio_create_0=(['filename']='dd_sparse_aio_disk' ['name']='dd_aio' ['block_size']='4096') 00:19:56.044 09:48:19 -- dd/sparse.sh@84 -- # local -A method_bdev_aio_create_0 00:19:56.044 09:48:19 -- dd/sparse.sh@91 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=dd_lvstore/dd_lvol --of=file_zero3 --bs=12582912 --sparse --json /dev/fd/62 00:19:56.044 09:48:19 -- dd/sparse.sh@91 -- # gen_conf 00:19:56.044 09:48:19 -- dd/common.sh@31 -- # xtrace_disable 00:19:56.044 09:48:19 -- common/autotest_common.sh@10 -- # set +x 00:19:56.044 { 00:19:56.044 "subsystems": [ 00:19:56.044 { 00:19:56.044 "subsystem": "bdev", 00:19:56.044 "config": [ 00:19:56.044 { 00:19:56.044 "params": { 00:19:56.044 "block_size": 4096, 00:19:56.044 "filename": "dd_sparse_aio_disk", 00:19:56.044 "name": "dd_aio" 00:19:56.044 }, 00:19:56.044 "method": "bdev_aio_create" 00:19:56.044 }, 00:19:56.044 { 00:19:56.044 "method": "bdev_wait_for_examine" 00:19:56.044 } 00:19:56.044 ] 00:19:56.044 } 00:19:56.044 ] 00:19:56.044 } 00:19:56.044 [2024-04-18 09:48:19.313946] Starting SPDK v24.05-pre git sha1 65b4e17c6 / DPDK 23.11.0 initialization... 
00:19:56.044 [2024-04-18 09:48:19.314119] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid67023 ] 00:19:56.303 [2024-04-18 09:48:19.484279] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:56.563 [2024-04-18 09:48:19.726983] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:19:58.198  Copying: 12/36 [MB] (average 1000 MBps) 00:19:58.198 00:19:58.198 09:48:21 -- dd/sparse.sh@97 -- # stat --printf=%s file_zero2 00:19:58.198 09:48:21 -- dd/sparse.sh@97 -- # stat2_s=37748736 00:19:58.198 09:48:21 -- dd/sparse.sh@98 -- # stat --printf=%s file_zero3 00:19:58.198 09:48:21 -- dd/sparse.sh@98 -- # stat3_s=37748736 00:19:58.198 09:48:21 -- dd/sparse.sh@100 -- # [[ 37748736 == \3\7\7\4\8\7\3\6 ]] 00:19:58.198 09:48:21 -- dd/sparse.sh@102 -- # stat --printf=%b file_zero2 00:19:58.198 09:48:21 -- dd/sparse.sh@102 -- # stat2_b=24576 00:19:58.198 09:48:21 -- dd/sparse.sh@103 -- # stat --printf=%b file_zero3 00:19:58.198 09:48:21 -- dd/sparse.sh@103 -- # stat3_b=24576 00:19:58.198 09:48:21 -- dd/sparse.sh@105 -- # [[ 24576 == \2\4\5\7\6 ]] 00:19:58.198 00:19:58.198 real 0m2.178s 00:19:58.198 user 0m1.813s 00:19:58.198 sys 0m1.096s 00:19:58.198 09:48:21 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:19:58.198 ************************************ 00:19:58.198 END TEST dd_sparse_bdev_to_file 00:19:58.198 ************************************ 00:19:58.198 09:48:21 -- common/autotest_common.sh@10 -- # set +x 00:19:58.198 09:48:21 -- dd/sparse.sh@1 -- # cleanup 00:19:58.198 09:48:21 -- dd/sparse.sh@11 -- # rm dd_sparse_aio_disk 00:19:58.198 09:48:21 -- dd/sparse.sh@12 -- # rm file_zero1 00:19:58.198 09:48:21 -- dd/sparse.sh@13 -- # rm file_zero2 00:19:58.198 09:48:21 -- dd/sparse.sh@14 -- # rm file_zero3 00:19:58.198 ************************************ 00:19:58.198 END TEST spdk_dd_sparse 00:19:58.198 ************************************ 00:19:58.198 00:19:58.198 real 0m7.163s 00:19:58.198 user 0m5.673s 00:19:58.198 sys 0m3.592s 00:19:58.198 09:48:21 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:19:58.198 09:48:21 -- common/autotest_common.sh@10 -- # set +x 00:19:58.198 09:48:21 -- dd/dd.sh@28 -- # run_test spdk_dd_negative /home/vagrant/spdk_repo/spdk/test/dd/negative_dd.sh 00:19:58.198 09:48:21 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:19:58.198 09:48:21 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:19:58.198 09:48:21 -- common/autotest_common.sh@10 -- # set +x 00:19:58.198 ************************************ 00:19:58.198 START TEST spdk_dd_negative 00:19:58.198 ************************************ 00:19:58.198 09:48:21 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/dd/negative_dd.sh 00:19:58.460 * Looking for test storage... 
00:19:58.460 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:19:58.460 09:48:21 -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:19:58.460 09:48:21 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:58.460 09:48:21 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:58.460 09:48:21 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:58.460 09:48:21 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:58.460 09:48:21 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:58.460 09:48:21 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:58.460 09:48:21 -- paths/export.sh@5 -- # export PATH 00:19:58.460 09:48:21 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:58.460 09:48:21 -- dd/negative_dd.sh@101 -- # test_file0=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:19:58.460 09:48:21 -- dd/negative_dd.sh@102 -- # test_file1=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:19:58.460 09:48:21 -- dd/negative_dd.sh@104 -- # touch /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:19:58.460 09:48:21 -- dd/negative_dd.sh@105 -- # touch /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:19:58.460 09:48:21 -- dd/negative_dd.sh@107 -- # run_test dd_invalid_arguments invalid_arguments 00:19:58.460 09:48:21 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:19:58.460 09:48:21 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:19:58.460 09:48:21 -- common/autotest_common.sh@10 -- # set +x 
00:19:58.460 ************************************ 00:19:58.460 START TEST dd_invalid_arguments 00:19:58.460 ************************************ 00:19:58.460 09:48:21 -- common/autotest_common.sh@1111 -- # invalid_arguments 00:19:58.460 09:48:21 -- dd/negative_dd.sh@12 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ii= --ob= 00:19:58.460 09:48:21 -- common/autotest_common.sh@638 -- # local es=0 00:19:58.460 09:48:21 -- common/autotest_common.sh@640 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ii= --ob= 00:19:58.460 09:48:21 -- common/autotest_common.sh@626 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:19:58.460 09:48:21 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:19:58.460 09:48:21 -- common/autotest_common.sh@630 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:19:58.460 09:48:21 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:19:58.460 09:48:21 -- common/autotest_common.sh@632 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:19:58.460 09:48:21 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:19:58.460 09:48:21 -- common/autotest_common.sh@632 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:19:58.460 09:48:21 -- common/autotest_common.sh@632 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:19:58.460 09:48:21 -- common/autotest_common.sh@641 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ii= --ob= 00:19:58.720 /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd [options] 00:19:58.720 00:19:58.720 CPU options: 00:19:58.720 -m, --cpumask core mask (like 0xF) or core list of '[]' embraced for DPDK 00:19:58.720 (like [0,1,10]) 00:19:58.720 --lcores lcore to CPU mapping list. The list is in the format: 00:19:58.720 [<,lcores[@CPUs]>...] 00:19:58.720 lcores and cpus list are grouped by '(' and ')', e.g '--lcores "(5-7)@(10-12)"' 00:19:58.720 Within the group, '-' is used for range separator, 00:19:58.720 ',' is used for single number separator. 00:19:58.720 '( )' can be omitted for single element group, 00:19:58.720 '@' can be omitted if cpus and lcores have the same value 00:19:58.720 --disable-cpumask-locks Disable CPU core lock files. 00:19:58.720 --interrupt-mode set app to interrupt mode (Warning: CPU usage will be reduced only if all 00:19:58.720 pollers in the app support interrupt mode) 00:19:58.720 -p, --main-core main (primary) core for DPDK 00:19:58.720 00:19:58.720 Configuration options: 00:19:58.720 -c, --config, --json JSON config file 00:19:58.720 -r, --rpc-socket RPC listen address (default /var/tmp/spdk.sock) 00:19:58.720 --no-rpc-server skip RPC server initialization. This option ignores '--rpc-socket' value. 
00:19:58.720 --wait-for-rpc wait for RPCs to initialize subsystems 00:19:58.720 --rpcs-allowed comma-separated list of permitted RPCS 00:19:58.720 --json-ignore-init-errors don't exit on invalid config entry 00:19:58.720 00:19:58.720 Memory options: 00:19:58.720 --iova-mode set IOVA mode ('pa' for IOVA_PA and 'va' for IOVA_VA) 00:19:58.720 --base-virtaddr the base virtual address for DPDK (default: 0x200000000000) 00:19:58.720 --huge-dir use a specific hugetlbfs mount to reserve memory from 00:19:58.720 -R, --huge-unlink unlink huge files after initialization 00:19:58.720 -n, --mem-channels number of memory channels used for DPDK 00:19:58.720 -s, --mem-size memory size in MB for DPDK (default: 0MB) 00:19:58.720 --msg-mempool-size global message memory pool size in count (default: 262143) 00:19:58.720 --no-huge run without using hugepages 00:19:58.720 -i, --shm-id shared memory ID (optional) 00:19:58.720 -g, --single-file-segments force creating just one hugetlbfs file 00:19:58.720 00:19:58.720 PCI options: 00:19:58.720 -A, --pci-allowed pci addr to allow (-B and -A cannot be used at the same time) 00:19:58.720 -B, --pci-blocked pci addr to block (can be used more than once) 00:19:58.720 -u, --no-pci disable PCI access 00:19:58.720 --vfio-vf-token VF token (UUID) shared between SR-IOV PF and VFs for vfio_pci driver 00:19:58.720 00:19:58.720 Log options: 00:19:58.720 -L, --logflag enable log flag (all, accel, accel_dsa, accel_iaa, accel_ioat, aio, 00:19:58.720 app_config, app_rpc, bdev, bdev_concat, bdev_ftl, bdev_malloc, 00:19:58.720 bdev_null, bdev_nvme, bdev_raid, bdev_raid0, bdev_raid1, bdev_raid_sb, 00:19:58.720 blob, blob_esnap, blob_rw, blobfs, blobfs_bdev, blobfs_bdev_rpc, 00:19:58.720 blobfs_rw, ftl_core, ftl_init, gpt_parse, idxd, ioat, iscsi_init, 00:19:58.720 json_util, keyring, log_rpc, lvol, lvol_rpc, notify_rpc, nvme, 00:19:58.720 nvme_auth, nvme_cuse, opal, reactor, rpc, rpc_client, sock, sock_posix, 00:19:58.720 thread, trace, uring, vbdev_delay, vbdev_gpt, vbdev_lvol, vbdev_opal, 00:19:58.720 vbdev_passthru, vbdev_split, vbdev_zone_block, vfio_pci, vfio_user, 00:19:58.720 virtio, virtio_blk, virtio_dev, virtio_pci, virtio_user, 00:19:58.720 virtio_vfio_user, vmd) 00:19:58.720 --silence-noticelog disable notice level logging to stderr 00:19:58.720 00:19:58.720 Trace options: 00:19:58.720 --num-trace-entries number of trace entries for each core, must be power of 2, 00:19:58.720 setting 0 to disable trace (default 32768) 00:19:58.720 Tracepoints vary in size and can use more than one trace entry. 00:19:58.720 -e, --tpoint-group [:] 00:19:58.720 group_name - tracepoint group name for spdk trace buffers (bdev, ftl, 00:19:58.720 /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd: unrecognized option '--ii=' 00:19:58.720 [2024-04-18 09:48:21.864912] spdk_dd.c:1480:main: *ERROR*: Invalid arguments 00:19:58.720 blobfs, dsa, thread, nvme_pcie, iaa, nvme_tcp, bdev_nvme, sock, all). 00:19:58.720 tpoint_mask - tracepoint mask for enabling individual tpoints inside 00:19:58.720 a tracepoint group. First tpoint inside a group can be enabled by 00:19:58.720 setting tpoint_mask to 1 (e.g. bdev:0x1). Groups and masks can be 00:19:58.720 combined (e.g. thread,bdev:0x1). 
All available tpoints can be found 00:19:58.720 in /include/spdk_internal/trace_defs.h 00:19:58.720 00:19:58.720 Other options: 00:19:58.720 -h, --help show this usage 00:19:58.720 -v, --version print SPDK version 00:19:58.720 -d, --limit-coredump do not set max coredump size to RLIM_INFINITY 00:19:58.720 --env-context Opaque context for use of the env implementation 00:19:58.720 00:19:58.720 Application specific: 00:19:58.720 [--------- DD Options ---------] 00:19:58.720 --if Input file. Must specify either --if or --ib. 00:19:58.720 --ib Input bdev. Must specifier either --if or --ib 00:19:58.720 --of Output file. Must specify either --of or --ob. 00:19:58.720 --ob Output bdev. Must specify either --of or --ob. 00:19:58.720 --iflag Input file flags. 00:19:58.720 --oflag Output file flags. 00:19:58.720 --bs I/O unit size (default: 4096) 00:19:58.720 --qd Queue depth (default: 2) 00:19:58.720 --count I/O unit count. The number of I/O units to copy. (default: all) 00:19:58.720 --skip Skip this many I/O units at start of input. (default: 0) 00:19:58.720 --seek Skip this many I/O units at start of output. (default: 0) 00:19:58.720 --aio Force usage of AIO. (by default io_uring is used if available) 00:19:58.720 --sparse Enable hole skipping in input target 00:19:58.720 Available iflag and oflag values: 00:19:58.720 append - append mode 00:19:58.720 direct - use direct I/O for data 00:19:58.720 directory - fail unless a directory 00:19:58.720 dsync - use synchronized I/O for data 00:19:58.720 noatime - do not update access time 00:19:58.720 noctty - do not assign controlling terminal from file 00:19:58.720 nofollow - do not follow symlinks 00:19:58.720 nonblock - use non-blocking I/O 00:19:58.720 sync - use synchronized I/O for data and metadata 00:19:58.720 09:48:21 -- common/autotest_common.sh@641 -- # es=2 00:19:58.720 09:48:21 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:19:58.720 09:48:21 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:19:58.720 09:48:21 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:19:58.720 00:19:58.720 real 0m0.162s 00:19:58.720 user 0m0.088s 00:19:58.720 sys 0m0.073s 00:19:58.720 09:48:21 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:19:58.720 09:48:21 -- common/autotest_common.sh@10 -- # set +x 00:19:58.720 ************************************ 00:19:58.720 END TEST dd_invalid_arguments 00:19:58.720 ************************************ 00:19:58.720 09:48:21 -- dd/negative_dd.sh@108 -- # run_test dd_double_input double_input 00:19:58.720 09:48:21 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:19:58.720 09:48:21 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:19:58.720 09:48:21 -- common/autotest_common.sh@10 -- # set +x 00:19:58.720 ************************************ 00:19:58.720 START TEST dd_double_input 00:19:58.720 ************************************ 00:19:58.720 09:48:22 -- common/autotest_common.sh@1111 -- # double_input 00:19:58.720 09:48:22 -- dd/negative_dd.sh@19 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ib= --ob= 00:19:58.720 09:48:22 -- common/autotest_common.sh@638 -- # local es=0 00:19:58.720 09:48:22 -- common/autotest_common.sh@640 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ib= --ob= 00:19:58.720 09:48:22 -- common/autotest_common.sh@626 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:19:58.720 09:48:22 -- common/autotest_common.sh@630 
-- # case "$(type -t "$arg")" in 00:19:58.720 09:48:22 -- common/autotest_common.sh@630 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:19:58.720 09:48:22 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:19:58.720 09:48:22 -- common/autotest_common.sh@632 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:19:58.720 09:48:22 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:19:58.720 09:48:22 -- common/autotest_common.sh@632 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:19:58.720 09:48:22 -- common/autotest_common.sh@632 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:19:58.720 09:48:22 -- common/autotest_common.sh@641 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ib= --ob= 00:19:58.980 [2024-04-18 09:48:22.127263] spdk_dd.c:1487:main: *ERROR*: You may specify either --if or --ib, but not both. 00:19:58.980 09:48:22 -- common/autotest_common.sh@641 -- # es=22 00:19:58.980 09:48:22 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:19:58.980 09:48:22 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:19:58.980 09:48:22 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:19:58.980 00:19:58.980 real 0m0.133s 00:19:58.980 user 0m0.075s 00:19:58.980 sys 0m0.056s 00:19:58.980 09:48:22 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:19:58.980 ************************************ 00:19:58.980 END TEST dd_double_input 00:19:58.980 ************************************ 00:19:58.980 09:48:22 -- common/autotest_common.sh@10 -- # set +x 00:19:58.980 09:48:22 -- dd/negative_dd.sh@109 -- # run_test dd_double_output double_output 00:19:58.980 09:48:22 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:19:58.980 09:48:22 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:19:58.980 09:48:22 -- common/autotest_common.sh@10 -- # set +x 00:19:58.980 ************************************ 00:19:58.980 START TEST dd_double_output 00:19:58.980 ************************************ 00:19:58.980 09:48:22 -- common/autotest_common.sh@1111 -- # double_output 00:19:58.980 09:48:22 -- dd/negative_dd.sh@27 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --ob= 00:19:58.980 09:48:22 -- common/autotest_common.sh@638 -- # local es=0 00:19:58.980 09:48:22 -- common/autotest_common.sh@640 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --ob= 00:19:58.980 09:48:22 -- common/autotest_common.sh@626 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:19:58.980 09:48:22 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:19:58.980 09:48:22 -- common/autotest_common.sh@630 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:19:58.980 09:48:22 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:19:58.980 09:48:22 -- common/autotest_common.sh@632 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:19:58.980 09:48:22 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:19:58.980 09:48:22 -- common/autotest_common.sh@632 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:19:58.980 09:48:22 -- common/autotest_common.sh@632 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:19:58.980 09:48:22 -- common/autotest_common.sh@641 -- 
# /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --ob= 00:19:59.239 [2024-04-18 09:48:22.401021] spdk_dd.c:1493:main: *ERROR*: You may specify either --of or --ob, but not both. 00:19:59.239 09:48:22 -- common/autotest_common.sh@641 -- # es=22 00:19:59.239 09:48:22 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:19:59.239 09:48:22 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:19:59.239 09:48:22 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:19:59.239 00:19:59.239 real 0m0.179s 00:19:59.239 user 0m0.095s 00:19:59.239 sys 0m0.082s 00:19:59.239 09:48:22 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:19:59.239 09:48:22 -- common/autotest_common.sh@10 -- # set +x 00:19:59.239 ************************************ 00:19:59.239 END TEST dd_double_output 00:19:59.239 ************************************ 00:19:59.239 09:48:22 -- dd/negative_dd.sh@110 -- # run_test dd_no_input no_input 00:19:59.239 09:48:22 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:19:59.239 09:48:22 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:19:59.239 09:48:22 -- common/autotest_common.sh@10 -- # set +x 00:19:59.239 ************************************ 00:19:59.239 START TEST dd_no_input 00:19:59.239 ************************************ 00:19:59.239 09:48:22 -- common/autotest_common.sh@1111 -- # no_input 00:19:59.239 09:48:22 -- dd/negative_dd.sh@35 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ob= 00:19:59.239 09:48:22 -- common/autotest_common.sh@638 -- # local es=0 00:19:59.239 09:48:22 -- common/autotest_common.sh@640 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ob= 00:19:59.239 09:48:22 -- common/autotest_common.sh@626 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:19:59.239 09:48:22 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:19:59.239 09:48:22 -- common/autotest_common.sh@630 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:19:59.239 09:48:22 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:19:59.239 09:48:22 -- common/autotest_common.sh@632 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:19:59.239 09:48:22 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:19:59.239 09:48:22 -- common/autotest_common.sh@632 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:19:59.239 09:48:22 -- common/autotest_common.sh@632 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:19:59.239 09:48:22 -- common/autotest_common.sh@641 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ob= 00:19:59.498 [2024-04-18 09:48:22.695851] spdk_dd.c:1499:main: *ERROR*: You must specify either --if or --ib 00:19:59.498 09:48:22 -- common/autotest_common.sh@641 -- # es=22 00:19:59.498 09:48:22 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:19:59.498 09:48:22 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:19:59.498 09:48:22 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:19:59.498 00:19:59.498 real 0m0.161s 00:19:59.498 user 0m0.088s 00:19:59.498 sys 0m0.071s 00:19:59.498 09:48:22 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:19:59.498 09:48:22 -- common/autotest_common.sh@10 -- # set +x 00:19:59.498 ************************************ 00:19:59.498 END TEST dd_no_input 00:19:59.498 ************************************ 00:19:59.498 09:48:22 -- dd/negative_dd.sh@111 -- # run_test dd_no_output 
no_output 00:19:59.498 09:48:22 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:19:59.498 09:48:22 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:19:59.498 09:48:22 -- common/autotest_common.sh@10 -- # set +x 00:19:59.498 ************************************ 00:19:59.498 START TEST dd_no_output 00:19:59.498 ************************************ 00:19:59.498 09:48:22 -- common/autotest_common.sh@1111 -- # no_output 00:19:59.498 09:48:22 -- dd/negative_dd.sh@41 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:19:59.498 09:48:22 -- common/autotest_common.sh@638 -- # local es=0 00:19:59.498 09:48:22 -- common/autotest_common.sh@640 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:19:59.498 09:48:22 -- common/autotest_common.sh@626 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:19:59.498 09:48:22 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:19:59.498 09:48:22 -- common/autotest_common.sh@630 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:19:59.498 09:48:22 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:19:59.498 09:48:22 -- common/autotest_common.sh@632 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:19:59.498 09:48:22 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:19:59.498 09:48:22 -- common/autotest_common.sh@632 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:19:59.498 09:48:22 -- common/autotest_common.sh@632 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:19:59.498 09:48:22 -- common/autotest_common.sh@641 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:19:59.757 [2024-04-18 09:48:22.983060] spdk_dd.c:1505:main: *ERROR*: You must specify either --of or --ob 00:19:59.757 09:48:23 -- common/autotest_common.sh@641 -- # es=22 00:19:59.757 09:48:23 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:19:59.757 09:48:23 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:19:59.757 09:48:23 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:19:59.757 00:19:59.757 real 0m0.165s 00:19:59.757 user 0m0.082s 00:19:59.757 sys 0m0.081s 00:19:59.757 09:48:23 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:19:59.757 09:48:23 -- common/autotest_common.sh@10 -- # set +x 00:19:59.757 ************************************ 00:19:59.757 END TEST dd_no_output 00:19:59.757 ************************************ 00:19:59.757 09:48:23 -- dd/negative_dd.sh@112 -- # run_test dd_wrong_blocksize wrong_blocksize 00:19:59.757 09:48:23 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:19:59.757 09:48:23 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:19:59.757 09:48:23 -- common/autotest_common.sh@10 -- # set +x 00:20:00.015 ************************************ 00:20:00.015 START TEST dd_wrong_blocksize 00:20:00.015 ************************************ 00:20:00.015 09:48:23 -- common/autotest_common.sh@1111 -- # wrong_blocksize 00:20:00.015 09:48:23 -- dd/negative_dd.sh@47 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=0 00:20:00.015 09:48:23 -- common/autotest_common.sh@638 -- # local es=0 00:20:00.015 09:48:23 -- common/autotest_common.sh@640 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 
--if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=0 00:20:00.015 09:48:23 -- common/autotest_common.sh@626 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:20:00.015 09:48:23 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:20:00.015 09:48:23 -- common/autotest_common.sh@630 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:20:00.015 09:48:23 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:20:00.015 09:48:23 -- common/autotest_common.sh@632 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:20:00.015 09:48:23 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:20:00.015 09:48:23 -- common/autotest_common.sh@632 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:20:00.015 09:48:23 -- common/autotest_common.sh@632 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:20:00.015 09:48:23 -- common/autotest_common.sh@641 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=0 00:20:00.015 [2024-04-18 09:48:23.264379] spdk_dd.c:1511:main: *ERROR*: Invalid --bs value 00:20:00.015 09:48:23 -- common/autotest_common.sh@641 -- # es=22 00:20:00.015 09:48:23 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:20:00.015 09:48:23 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:20:00.015 09:48:23 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:20:00.015 00:20:00.015 real 0m0.159s 00:20:00.015 user 0m0.089s 00:20:00.015 sys 0m0.068s 00:20:00.015 09:48:23 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:20:00.015 09:48:23 -- common/autotest_common.sh@10 -- # set +x 00:20:00.015 ************************************ 00:20:00.015 END TEST dd_wrong_blocksize 00:20:00.015 ************************************ 00:20:00.015 09:48:23 -- dd/negative_dd.sh@113 -- # run_test dd_smaller_blocksize smaller_blocksize 00:20:00.015 09:48:23 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:20:00.015 09:48:23 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:20:00.015 09:48:23 -- common/autotest_common.sh@10 -- # set +x 00:20:00.273 ************************************ 00:20:00.273 START TEST dd_smaller_blocksize 00:20:00.273 ************************************ 00:20:00.273 09:48:23 -- common/autotest_common.sh@1111 -- # smaller_blocksize 00:20:00.273 09:48:23 -- dd/negative_dd.sh@55 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=99999999999999 00:20:00.273 09:48:23 -- common/autotest_common.sh@638 -- # local es=0 00:20:00.273 09:48:23 -- common/autotest_common.sh@640 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=99999999999999 00:20:00.273 09:48:23 -- common/autotest_common.sh@626 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:20:00.273 09:48:23 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:20:00.273 09:48:23 -- common/autotest_common.sh@630 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:20:00.273 09:48:23 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:20:00.273 09:48:23 -- common/autotest_common.sh@632 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:20:00.273 09:48:23 -- 
common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:20:00.273 09:48:23 -- common/autotest_common.sh@632 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:20:00.273 09:48:23 -- common/autotest_common.sh@632 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:20:00.274 09:48:23 -- common/autotest_common.sh@641 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=99999999999999 00:20:00.274 [2024-04-18 09:48:23.541722] Starting SPDK v24.05-pre git sha1 65b4e17c6 / DPDK 23.11.0 initialization... 00:20:00.274 [2024-04-18 09:48:23.541917] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid67311 ] 00:20:00.532 [2024-04-18 09:48:23.716609] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:00.791 [2024-04-18 09:48:24.003564] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:20:01.359 EAL: eal_memalloc_alloc_seg_bulk(): couldn't find suitable memseg_list 00:20:01.359 [2024-04-18 09:48:24.706250] spdk_dd.c:1184:dd_run: *ERROR*: Cannot allocate memory - try smaller block size value 00:20:01.359 [2024-04-18 09:48:24.706396] app.c: 966:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:20:02.327 [2024-04-18 09:48:25.449243] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:20:02.586 09:48:25 -- common/autotest_common.sh@641 -- # es=244 00:20:02.586 09:48:25 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:20:02.586 09:48:25 -- common/autotest_common.sh@650 -- # es=116 00:20:02.586 09:48:25 -- common/autotest_common.sh@651 -- # case "$es" in 00:20:02.586 09:48:25 -- common/autotest_common.sh@658 -- # es=1 00:20:02.586 09:48:25 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:20:02.586 00:20:02.586 real 0m2.408s 00:20:02.586 user 0m1.730s 00:20:02.586 sys 0m0.563s 00:20:02.586 09:48:25 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:20:02.586 09:48:25 -- common/autotest_common.sh@10 -- # set +x 00:20:02.587 ************************************ 00:20:02.587 END TEST dd_smaller_blocksize 00:20:02.587 ************************************ 00:20:02.587 09:48:25 -- dd/negative_dd.sh@114 -- # run_test dd_invalid_count invalid_count 00:20:02.587 09:48:25 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:20:02.587 09:48:25 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:20:02.587 09:48:25 -- common/autotest_common.sh@10 -- # set +x 00:20:02.587 ************************************ 00:20:02.587 START TEST dd_invalid_count 00:20:02.587 ************************************ 00:20:02.587 09:48:25 -- common/autotest_common.sh@1111 -- # invalid_count 00:20:02.587 09:48:25 -- dd/negative_dd.sh@63 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --count=-9 00:20:02.587 09:48:25 -- common/autotest_common.sh@638 -- # local es=0 00:20:02.587 09:48:25 -- common/autotest_common.sh@640 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --count=-9 00:20:02.587 09:48:25 -- common/autotest_common.sh@626 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:20:02.587 09:48:25 -- 
common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:20:02.587 09:48:25 -- common/autotest_common.sh@630 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:20:02.587 09:48:25 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:20:02.587 09:48:25 -- common/autotest_common.sh@632 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:20:02.587 09:48:25 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:20:02.587 09:48:25 -- common/autotest_common.sh@632 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:20:02.587 09:48:25 -- common/autotest_common.sh@632 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:20:02.587 09:48:25 -- common/autotest_common.sh@641 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --count=-9 00:20:02.846 [2024-04-18 09:48:26.046235] spdk_dd.c:1517:main: *ERROR*: Invalid --count value 00:20:02.846 09:48:26 -- common/autotest_common.sh@641 -- # es=22 00:20:02.846 09:48:26 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:20:02.846 09:48:26 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:20:02.846 09:48:26 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:20:02.846 00:20:02.846 real 0m0.138s 00:20:02.846 user 0m0.078s 00:20:02.846 sys 0m0.059s 00:20:02.846 09:48:26 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:20:02.846 09:48:26 -- common/autotest_common.sh@10 -- # set +x 00:20:02.846 ************************************ 00:20:02.846 END TEST dd_invalid_count 00:20:02.846 ************************************ 00:20:02.846 09:48:26 -- dd/negative_dd.sh@115 -- # run_test dd_invalid_oflag invalid_oflag 00:20:02.846 09:48:26 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:20:02.846 09:48:26 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:20:02.846 09:48:26 -- common/autotest_common.sh@10 -- # set +x 00:20:02.846 ************************************ 00:20:02.846 START TEST dd_invalid_oflag 00:20:02.846 ************************************ 00:20:02.846 09:48:26 -- common/autotest_common.sh@1111 -- # invalid_oflag 00:20:02.846 09:48:26 -- dd/negative_dd.sh@71 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib= --ob= --oflag=0 00:20:02.846 09:48:26 -- common/autotest_common.sh@638 -- # local es=0 00:20:02.846 09:48:26 -- common/autotest_common.sh@640 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib= --ob= --oflag=0 00:20:02.846 09:48:26 -- common/autotest_common.sh@626 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:20:02.846 09:48:26 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:20:02.846 09:48:26 -- common/autotest_common.sh@630 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:20:02.846 09:48:26 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:20:02.846 09:48:26 -- common/autotest_common.sh@632 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:20:02.846 09:48:26 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:20:02.846 09:48:26 -- common/autotest_common.sh@632 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:20:02.846 09:48:26 -- common/autotest_common.sh@632 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:20:02.846 09:48:26 -- common/autotest_common.sh@641 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib= --ob= --oflag=0 00:20:03.105 [2024-04-18 09:48:26.304207] 
spdk_dd.c:1523:main: *ERROR*: --oflags may be used only with --of 00:20:03.105 09:48:26 -- common/autotest_common.sh@641 -- # es=22 00:20:03.105 09:48:26 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:20:03.105 09:48:26 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:20:03.105 09:48:26 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:20:03.105 00:20:03.105 real 0m0.137s 00:20:03.105 user 0m0.064s 00:20:03.105 sys 0m0.072s 00:20:03.105 09:48:26 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:20:03.105 ************************************ 00:20:03.105 END TEST dd_invalid_oflag 00:20:03.105 ************************************ 00:20:03.105 09:48:26 -- common/autotest_common.sh@10 -- # set +x 00:20:03.105 09:48:26 -- dd/negative_dd.sh@116 -- # run_test dd_invalid_iflag invalid_iflag 00:20:03.105 09:48:26 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:20:03.105 09:48:26 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:20:03.105 09:48:26 -- common/autotest_common.sh@10 -- # set +x 00:20:03.105 ************************************ 00:20:03.105 START TEST dd_invalid_iflag 00:20:03.105 ************************************ 00:20:03.105 09:48:26 -- common/autotest_common.sh@1111 -- # invalid_iflag 00:20:03.105 09:48:26 -- dd/negative_dd.sh@79 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib= --ob= --iflag=0 00:20:03.105 09:48:26 -- common/autotest_common.sh@638 -- # local es=0 00:20:03.105 09:48:26 -- common/autotest_common.sh@640 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib= --ob= --iflag=0 00:20:03.105 09:48:26 -- common/autotest_common.sh@626 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:20:03.105 09:48:26 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:20:03.105 09:48:26 -- common/autotest_common.sh@630 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:20:03.105 09:48:26 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:20:03.105 09:48:26 -- common/autotest_common.sh@632 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:20:03.105 09:48:26 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:20:03.105 09:48:26 -- common/autotest_common.sh@632 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:20:03.105 09:48:26 -- common/autotest_common.sh@632 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:20:03.105 09:48:26 -- common/autotest_common.sh@641 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib= --ob= --iflag=0 00:20:03.364 [2024-04-18 09:48:26.577974] spdk_dd.c:1529:main: *ERROR*: --iflags may be used only with --if 00:20:03.364 09:48:26 -- common/autotest_common.sh@641 -- # es=22 00:20:03.364 09:48:26 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:20:03.364 09:48:26 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:20:03.364 09:48:26 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:20:03.364 00:20:03.364 real 0m0.164s 00:20:03.364 user 0m0.088s 00:20:03.364 sys 0m0.075s 00:20:03.365 ************************************ 00:20:03.365 END TEST dd_invalid_iflag 00:20:03.365 ************************************ 00:20:03.365 09:48:26 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:20:03.365 09:48:26 -- common/autotest_common.sh@10 -- # set +x 00:20:03.365 09:48:26 -- dd/negative_dd.sh@117 -- # run_test dd_unknown_flag unknown_flag 00:20:03.365 09:48:26 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:20:03.365 09:48:26 -- common/autotest_common.sh@1093 
-- # xtrace_disable 00:20:03.365 09:48:26 -- common/autotest_common.sh@10 -- # set +x 00:20:03.365 ************************************ 00:20:03.365 START TEST dd_unknown_flag 00:20:03.365 ************************************ 00:20:03.365 09:48:26 -- common/autotest_common.sh@1111 -- # unknown_flag 00:20:03.365 09:48:26 -- dd/negative_dd.sh@87 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=-1 00:20:03.624 09:48:26 -- common/autotest_common.sh@638 -- # local es=0 00:20:03.624 09:48:26 -- common/autotest_common.sh@640 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=-1 00:20:03.624 09:48:26 -- common/autotest_common.sh@626 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:20:03.624 09:48:26 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:20:03.624 09:48:26 -- common/autotest_common.sh@630 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:20:03.624 09:48:26 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:20:03.624 09:48:26 -- common/autotest_common.sh@632 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:20:03.624 09:48:26 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:20:03.624 09:48:26 -- common/autotest_common.sh@632 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:20:03.624 09:48:26 -- common/autotest_common.sh@632 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:20:03.624 09:48:26 -- common/autotest_common.sh@641 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=-1 00:20:03.624 [2024-04-18 09:48:26.846734] Starting SPDK v24.05-pre git sha1 65b4e17c6 / DPDK 23.11.0 initialization... 
00:20:03.624 [2024-04-18 09:48:26.846883] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid67452 ] 00:20:03.624 [2024-04-18 09:48:27.009601] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:03.882 [2024-04-18 09:48:27.271602] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:20:04.448 [2024-04-18 09:48:27.582747] spdk_dd.c: 986:parse_flags: *ERROR*: Unknown file flag: -1 00:20:04.448 [2024-04-18 09:48:27.582827] app.c: 966:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:20:04.448 [2024-04-18 09:48:27.582912] spdk_dd.c: 986:parse_flags: *ERROR*: Unknown file flag: -1 00:20:04.448 [2024-04-18 09:48:27.582933] app.c: 966:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:20:04.448 [2024-04-18 09:48:27.583236] spdk_dd.c:1218:dd_run: *ERROR*: Failed to register files with io_uring: -9 (Bad file descriptor) 00:20:04.448 [2024-04-18 09:48:27.583261] app.c: 966:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:20:04.448 [2024-04-18 09:48:27.583337] app.c: 953:app_stop: *NOTICE*: spdk_app_stop called twice 00:20:04.448 [2024-04-18 09:48:27.583353] app.c: 953:app_stop: *NOTICE*: spdk_app_stop called twice 00:20:05.016 [2024-04-18 09:48:28.357394] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:20:05.583 09:48:28 -- common/autotest_common.sh@641 -- # es=234 00:20:05.583 09:48:28 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:20:05.583 09:48:28 -- common/autotest_common.sh@650 -- # es=106 00:20:05.583 09:48:28 -- common/autotest_common.sh@651 -- # case "$es" in 00:20:05.583 09:48:28 -- common/autotest_common.sh@658 -- # es=1 00:20:05.583 09:48:28 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:20:05.583 00:20:05.583 real 0m2.016s 00:20:05.583 user 0m1.641s 00:20:05.583 sys 0m0.266s 00:20:05.583 09:48:28 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:20:05.583 ************************************ 00:20:05.583 END TEST dd_unknown_flag 00:20:05.583 ************************************ 00:20:05.583 09:48:28 -- common/autotest_common.sh@10 -- # set +x 00:20:05.583 09:48:28 -- dd/negative_dd.sh@118 -- # run_test dd_invalid_json invalid_json 00:20:05.583 09:48:28 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:20:05.583 09:48:28 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:20:05.583 09:48:28 -- common/autotest_common.sh@10 -- # set +x 00:20:05.583 ************************************ 00:20:05.583 START TEST dd_invalid_json 00:20:05.583 ************************************ 00:20:05.583 09:48:28 -- common/autotest_common.sh@1111 -- # invalid_json 00:20:05.583 09:48:28 -- dd/negative_dd.sh@95 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --json /dev/fd/62 00:20:05.583 09:48:28 -- dd/negative_dd.sh@95 -- # : 00:20:05.583 09:48:28 -- common/autotest_common.sh@638 -- # local es=0 00:20:05.583 09:48:28 -- common/autotest_common.sh@640 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --json /dev/fd/62 00:20:05.583 09:48:28 -- common/autotest_common.sh@626 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:20:05.583 09:48:28 -- common/autotest_common.sh@630 -- # 
case "$(type -t "$arg")" in 00:20:05.583 09:48:28 -- common/autotest_common.sh@630 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:20:05.583 09:48:28 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:20:05.584 09:48:28 -- common/autotest_common.sh@632 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:20:05.584 09:48:28 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:20:05.584 09:48:28 -- common/autotest_common.sh@632 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:20:05.584 09:48:28 -- common/autotest_common.sh@632 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:20:05.584 09:48:28 -- common/autotest_common.sh@641 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --json /dev/fd/62 00:20:05.842 [2024-04-18 09:48:29.010351] Starting SPDK v24.05-pre git sha1 65b4e17c6 / DPDK 23.11.0 initialization... 00:20:05.842 [2024-04-18 09:48:29.010528] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid67499 ] 00:20:05.842 [2024-04-18 09:48:29.186273] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:06.100 [2024-04-18 09:48:29.478685] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:20:06.100 [2024-04-18 09:48:29.478790] json_config.c: 535:parse_json: *ERROR*: JSON data cannot be empty 00:20:06.100 [2024-04-18 09:48:29.478817] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:20:06.100 [2024-04-18 09:48:29.478835] app.c: 966:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:20:06.100 [2024-04-18 09:48:29.478936] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:20:06.675 09:48:29 -- common/autotest_common.sh@641 -- # es=234 00:20:06.675 09:48:29 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:20:06.675 09:48:29 -- common/autotest_common.sh@650 -- # es=106 00:20:06.675 09:48:29 -- common/autotest_common.sh@651 -- # case "$es" in 00:20:06.675 09:48:29 -- common/autotest_common.sh@658 -- # es=1 00:20:06.675 09:48:29 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:20:06.675 00:20:06.675 real 0m0.997s 00:20:06.675 user 0m0.731s 00:20:06.675 sys 0m0.158s 00:20:06.675 09:48:29 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:20:06.675 ************************************ 00:20:06.675 END TEST dd_invalid_json 00:20:06.675 ************************************ 00:20:06.675 09:48:29 -- common/autotest_common.sh@10 -- # set +x 00:20:06.675 ************************************ 00:20:06.675 END TEST spdk_dd_negative 00:20:06.675 ************************************ 00:20:06.675 00:20:06.675 real 0m8.358s 00:20:06.675 user 0m5.417s 00:20:06.675 sys 0m2.387s 00:20:06.675 09:48:29 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:20:06.675 09:48:29 -- common/autotest_common.sh@10 -- # set +x 00:20:06.675 ************************************ 00:20:06.675 END TEST spdk_dd 00:20:06.675 ************************************ 00:20:06.675 00:20:06.675 real 3m24.467s 00:20:06.675 user 2m44.393s 00:20:06.675 sys 1m12.451s 00:20:06.675 09:48:29 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:20:06.675 09:48:29 -- common/autotest_common.sh@10 -- # set +x 00:20:06.675 09:48:30 -- spdk/autotest.sh@207 -- # '[' 0 -eq 
1 ']' 00:20:06.675 09:48:30 -- spdk/autotest.sh@254 -- # '[' 0 -eq 1 ']' 00:20:06.675 09:48:30 -- spdk/autotest.sh@258 -- # timing_exit lib 00:20:06.675 09:48:30 -- common/autotest_common.sh@716 -- # xtrace_disable 00:20:06.675 09:48:30 -- common/autotest_common.sh@10 -- # set +x 00:20:06.675 09:48:30 -- spdk/autotest.sh@260 -- # '[' 0 -eq 1 ']' 00:20:06.675 09:48:30 -- spdk/autotest.sh@268 -- # '[' 0 -eq 1 ']' 00:20:06.675 09:48:30 -- spdk/autotest.sh@277 -- # '[' 1 -eq 1 ']' 00:20:06.675 09:48:30 -- spdk/autotest.sh@278 -- # export NET_TYPE 00:20:06.675 09:48:30 -- spdk/autotest.sh@281 -- # '[' tcp = rdma ']' 00:20:06.675 09:48:30 -- spdk/autotest.sh@284 -- # '[' tcp = tcp ']' 00:20:06.676 09:48:30 -- spdk/autotest.sh@285 -- # run_test nvmf_tcp /home/vagrant/spdk_repo/spdk/test/nvmf/nvmf.sh --transport=tcp 00:20:06.676 09:48:30 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:20:06.676 09:48:30 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:20:06.676 09:48:30 -- common/autotest_common.sh@10 -- # set +x 00:20:06.935 ************************************ 00:20:06.935 START TEST nvmf_tcp 00:20:06.935 ************************************ 00:20:06.935 09:48:30 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/nvmf.sh --transport=tcp 00:20:06.935 * Looking for test storage... 00:20:06.935 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf 00:20:06.935 09:48:30 -- nvmf/nvmf.sh@10 -- # uname -s 00:20:06.935 09:48:30 -- nvmf/nvmf.sh@10 -- # '[' '!' Linux = Linux ']' 00:20:06.935 09:48:30 -- nvmf/nvmf.sh@14 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:20:06.935 09:48:30 -- nvmf/common.sh@7 -- # uname -s 00:20:06.935 09:48:30 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:06.935 09:48:30 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:06.935 09:48:30 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:06.935 09:48:30 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:06.935 09:48:30 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:06.935 09:48:30 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:06.935 09:48:30 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:06.935 09:48:30 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:06.935 09:48:30 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:06.935 09:48:30 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:06.935 09:48:30 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:e0c1fd61-2902-4c83-8893-eeb512aeb67d 00:20:06.935 09:48:30 -- nvmf/common.sh@18 -- # NVME_HOSTID=e0c1fd61-2902-4c83-8893-eeb512aeb67d 00:20:06.935 09:48:30 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:06.935 09:48:30 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:06.935 09:48:30 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:20:06.935 09:48:30 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:20:06.935 09:48:30 -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:20:06.935 09:48:30 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:06.935 09:48:30 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:06.935 09:48:30 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:06.935 09:48:30 -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:06.935 09:48:30 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:06.935 09:48:30 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:06.935 09:48:30 -- paths/export.sh@5 -- # export PATH 00:20:06.935 09:48:30 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:06.935 09:48:30 -- nvmf/common.sh@47 -- # : 0 00:20:06.935 09:48:30 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:20:06.935 09:48:30 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:20:06.935 09:48:30 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:20:06.935 09:48:30 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:06.935 09:48:30 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:06.935 09:48:30 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:20:06.935 09:48:30 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:20:06.935 09:48:30 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:20:06.935 09:48:30 -- nvmf/nvmf.sh@16 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:20:06.935 09:48:30 -- nvmf/nvmf.sh@18 -- # TEST_ARGS=("$@") 00:20:06.935 09:48:30 -- nvmf/nvmf.sh@20 -- # timing_enter target 00:20:06.935 09:48:30 -- common/autotest_common.sh@710 -- # xtrace_disable 00:20:06.935 09:48:30 -- common/autotest_common.sh@10 -- # set +x 00:20:06.935 09:48:30 -- nvmf/nvmf.sh@22 -- # [[ 1 -eq 0 ]] 00:20:06.935 09:48:30 -- nvmf/nvmf.sh@47 -- # run_test nvmf_host_management /home/vagrant/spdk_repo/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:20:06.935 09:48:30 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:20:06.935 09:48:30 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:20:06.935 09:48:30 -- common/autotest_common.sh@10 -- # set +x 00:20:06.935 ************************************ 00:20:06.935 START TEST nvmf_host_management 00:20:06.935 ************************************ 00:20:06.935 09:48:30 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:20:07.194 * Looking for test storage... 
00:20:07.194 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:20:07.194 09:48:30 -- target/host_management.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:20:07.194 09:48:30 -- nvmf/common.sh@7 -- # uname -s 00:20:07.194 09:48:30 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:07.194 09:48:30 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:07.194 09:48:30 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:07.194 09:48:30 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:07.194 09:48:30 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:07.194 09:48:30 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:07.194 09:48:30 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:07.194 09:48:30 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:07.194 09:48:30 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:07.194 09:48:30 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:07.194 09:48:30 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:e0c1fd61-2902-4c83-8893-eeb512aeb67d 00:20:07.194 09:48:30 -- nvmf/common.sh@18 -- # NVME_HOSTID=e0c1fd61-2902-4c83-8893-eeb512aeb67d 00:20:07.195 09:48:30 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:07.195 09:48:30 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:07.195 09:48:30 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:20:07.195 09:48:30 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:20:07.195 09:48:30 -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:20:07.195 09:48:30 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:07.195 09:48:30 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:07.195 09:48:30 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:07.195 09:48:30 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:07.195 09:48:30 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:07.195 09:48:30 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:07.195 09:48:30 -- paths/export.sh@5 -- # export PATH 00:20:07.195 09:48:30 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:07.195 09:48:30 -- nvmf/common.sh@47 -- # : 0 00:20:07.195 09:48:30 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:20:07.195 09:48:30 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:20:07.195 09:48:30 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:20:07.195 09:48:30 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:07.195 09:48:30 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:07.195 09:48:30 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:20:07.195 09:48:30 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:20:07.195 09:48:30 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:20:07.195 09:48:30 -- target/host_management.sh@11 -- # MALLOC_BDEV_SIZE=64 00:20:07.195 09:48:30 -- target/host_management.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:20:07.195 09:48:30 -- target/host_management.sh@105 -- # nvmftestinit 00:20:07.195 09:48:30 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:20:07.195 09:48:30 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:07.195 09:48:30 -- nvmf/common.sh@437 -- # prepare_net_devs 00:20:07.195 09:48:30 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:20:07.195 09:48:30 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:20:07.195 09:48:30 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:07.195 09:48:30 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:20:07.195 09:48:30 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:07.195 09:48:30 -- nvmf/common.sh@403 -- # [[ virt != virt ]] 00:20:07.195 09:48:30 -- nvmf/common.sh@405 -- # [[ no == yes ]] 00:20:07.195 09:48:30 -- nvmf/common.sh@412 -- # [[ virt == phy ]] 00:20:07.195 09:48:30 -- nvmf/common.sh@415 -- # [[ virt == phy-fallback ]] 00:20:07.195 09:48:30 -- nvmf/common.sh@420 -- # [[ tcp == tcp ]] 00:20:07.195 09:48:30 -- nvmf/common.sh@421 -- # nvmf_veth_init 00:20:07.195 09:48:30 -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:07.195 09:48:30 -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:20:07.195 09:48:30 -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:20:07.195 09:48:30 -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:20:07.195 09:48:30 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:20:07.195 09:48:30 -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:20:07.195 09:48:30 -- 
nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:20:07.195 09:48:30 -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:20:07.195 09:48:30 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:20:07.195 09:48:30 -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:20:07.195 09:48:30 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:20:07.195 09:48:30 -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:20:07.195 09:48:30 -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:20:07.195 Cannot find device "nvmf_init_br" 00:20:07.195 09:48:30 -- nvmf/common.sh@154 -- # true 00:20:07.195 09:48:30 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:20:07.195 Cannot find device "nvmf_tgt_br" 00:20:07.195 09:48:30 -- nvmf/common.sh@155 -- # true 00:20:07.195 09:48:30 -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:20:07.195 Cannot find device "nvmf_tgt_br2" 00:20:07.195 09:48:30 -- nvmf/common.sh@156 -- # true 00:20:07.195 09:48:30 -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:20:07.195 Cannot find device "nvmf_init_br" 00:20:07.195 09:48:30 -- nvmf/common.sh@157 -- # true 00:20:07.195 09:48:30 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:20:07.195 Cannot find device "nvmf_tgt_br" 00:20:07.195 09:48:30 -- nvmf/common.sh@158 -- # true 00:20:07.195 09:48:30 -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:20:07.195 Cannot find device "nvmf_tgt_br2" 00:20:07.195 09:48:30 -- nvmf/common.sh@159 -- # true 00:20:07.195 09:48:30 -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:20:07.195 Cannot find device "nvmf_br" 00:20:07.195 09:48:30 -- nvmf/common.sh@160 -- # true 00:20:07.195 09:48:30 -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:20:07.195 Cannot find device "nvmf_init_if" 00:20:07.195 09:48:30 -- nvmf/common.sh@161 -- # true 00:20:07.195 09:48:30 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:20:07.195 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:20:07.195 09:48:30 -- nvmf/common.sh@162 -- # true 00:20:07.195 09:48:30 -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:20:07.195 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:20:07.195 09:48:30 -- nvmf/common.sh@163 -- # true 00:20:07.195 09:48:30 -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:20:07.195 09:48:30 -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:20:07.195 09:48:30 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:20:07.195 09:48:30 -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:20:07.195 09:48:30 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:20:07.195 09:48:30 -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:20:07.454 09:48:30 -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:20:07.454 09:48:30 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:20:07.454 09:48:30 -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:20:07.454 09:48:30 -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:20:07.454 09:48:30 -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:20:07.454 09:48:30 -- 
nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:20:07.454 09:48:30 -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:20:07.454 09:48:30 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:20:07.454 09:48:30 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:20:07.454 09:48:30 -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:20:07.454 09:48:30 -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:20:07.454 09:48:30 -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:20:07.454 09:48:30 -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:20:07.454 09:48:30 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:20:07.454 09:48:30 -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:20:07.454 09:48:30 -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:20:07.454 09:48:30 -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:20:07.454 09:48:30 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:20:07.454 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:20:07.454 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.107 ms 00:20:07.454 00:20:07.454 --- 10.0.0.2 ping statistics --- 00:20:07.454 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:07.454 rtt min/avg/max/mdev = 0.107/0.107/0.107/0.000 ms 00:20:07.454 09:48:30 -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:20:07.454 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:20:07.454 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.096 ms 00:20:07.454 00:20:07.454 --- 10.0.0.3 ping statistics --- 00:20:07.454 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:07.454 rtt min/avg/max/mdev = 0.096/0.096/0.096/0.000 ms 00:20:07.454 09:48:30 -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:20:07.454 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:20:07.454 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.034 ms 00:20:07.454 00:20:07.454 --- 10.0.0.1 ping statistics --- 00:20:07.454 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:07.454 rtt min/avg/max/mdev = 0.034/0.034/0.034/0.000 ms 00:20:07.454 09:48:30 -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:07.454 09:48:30 -- nvmf/common.sh@422 -- # return 0 00:20:07.454 09:48:30 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:20:07.454 09:48:30 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:07.454 09:48:30 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:20:07.454 09:48:30 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:20:07.454 09:48:30 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:07.454 09:48:30 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:20:07.454 09:48:30 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:20:07.454 09:48:30 -- target/host_management.sh@107 -- # run_test nvmf_host_management nvmf_host_management 00:20:07.454 09:48:30 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:20:07.454 09:48:30 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:20:07.454 09:48:30 -- common/autotest_common.sh@10 -- # set +x 00:20:07.712 ************************************ 00:20:07.712 START TEST nvmf_host_management 00:20:07.712 ************************************ 00:20:07.712 09:48:30 -- common/autotest_common.sh@1111 -- # nvmf_host_management 00:20:07.712 09:48:30 -- target/host_management.sh@69 -- # starttarget 00:20:07.712 09:48:30 -- target/host_management.sh@16 -- # nvmfappstart -m 0x1E 00:20:07.713 09:48:30 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:20:07.713 09:48:30 -- common/autotest_common.sh@710 -- # xtrace_disable 00:20:07.713 09:48:30 -- common/autotest_common.sh@10 -- # set +x 00:20:07.713 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:07.713 09:48:30 -- nvmf/common.sh@470 -- # nvmfpid=67784 00:20:07.713 09:48:30 -- nvmf/common.sh@469 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:20:07.713 09:48:30 -- nvmf/common.sh@471 -- # waitforlisten 67784 00:20:07.713 09:48:30 -- common/autotest_common.sh@817 -- # '[' -z 67784 ']' 00:20:07.713 09:48:30 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:07.713 09:48:30 -- common/autotest_common.sh@822 -- # local max_retries=100 00:20:07.713 09:48:30 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:07.713 09:48:30 -- common/autotest_common.sh@826 -- # xtrace_disable 00:20:07.713 09:48:30 -- common/autotest_common.sh@10 -- # set +x 00:20:07.713 [2024-04-18 09:48:31.024677] Starting SPDK v24.05-pre git sha1 65b4e17c6 / DPDK 23.11.0 initialization... 00:20:07.713 [2024-04-18 09:48:31.025068] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:07.971 [2024-04-18 09:48:31.218266] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:20:08.229 [2024-04-18 09:48:31.501081] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:08.229 [2024-04-18 09:48:31.501369] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:20:08.230 [2024-04-18 09:48:31.501562] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:08.230 [2024-04-18 09:48:31.501769] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:08.230 [2024-04-18 09:48:31.501903] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:20:08.230 [2024-04-18 09:48:31.502254] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:20:08.230 [2024-04-18 09:48:31.502588] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:20:08.230 [2024-04-18 09:48:31.502543] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:20:08.230 [2024-04-18 09:48:31.502325] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:20:08.797 09:48:31 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:20:08.797 09:48:31 -- common/autotest_common.sh@850 -- # return 0 00:20:08.797 09:48:31 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:20:08.797 09:48:31 -- common/autotest_common.sh@716 -- # xtrace_disable 00:20:08.797 09:48:31 -- common/autotest_common.sh@10 -- # set +x 00:20:08.797 09:48:31 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:08.797 09:48:31 -- target/host_management.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:20:08.797 09:48:31 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:08.797 09:48:31 -- common/autotest_common.sh@10 -- # set +x 00:20:08.797 [2024-04-18 09:48:31.985004] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:08.797 09:48:32 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:08.797 09:48:32 -- target/host_management.sh@20 -- # timing_enter create_subsystem 00:20:08.797 09:48:32 -- common/autotest_common.sh@710 -- # xtrace_disable 00:20:08.797 09:48:32 -- common/autotest_common.sh@10 -- # set +x 00:20:08.797 09:48:32 -- target/host_management.sh@22 -- # rm -rf /home/vagrant/spdk_repo/spdk/test/nvmf/target/rpcs.txt 00:20:08.797 09:48:32 -- target/host_management.sh@23 -- # cat 00:20:08.797 09:48:32 -- target/host_management.sh@30 -- # rpc_cmd 00:20:08.797 09:48:32 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:08.797 09:48:32 -- common/autotest_common.sh@10 -- # set +x 00:20:08.797 Malloc0 00:20:08.797 [2024-04-18 09:48:32.123855] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:08.797 09:48:32 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:08.797 09:48:32 -- target/host_management.sh@31 -- # timing_exit create_subsystems 00:20:08.797 09:48:32 -- common/autotest_common.sh@716 -- # xtrace_disable 00:20:08.797 09:48:32 -- common/autotest_common.sh@10 -- # set +x 00:20:08.797 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:20:08.797 09:48:32 -- target/host_management.sh@73 -- # perfpid=67838 00:20:08.797 09:48:32 -- target/host_management.sh@74 -- # waitforlisten 67838 /var/tmp/bdevperf.sock 00:20:08.797 09:48:32 -- common/autotest_common.sh@817 -- # '[' -z 67838 ']' 00:20:08.797 09:48:32 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:08.797 09:48:32 -- common/autotest_common.sh@822 -- # local max_retries=100 00:20:08.797 09:48:32 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
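Behind the rpc_cmd calls above, the target is configured over /var/tmp/spdk.sock: a TCP transport is created with the options shown, a 64 MiB / 512 B-block Malloc0 bdev is added, and a listener on 10.0.0.2 port 4420 comes up for nqn.2016-06.io.spdk:cnode0. The exact RPC list lives in the generated rpcs.txt, which the log only cats; a rough stand-alone equivalent using SPDK's scripts/rpc.py, assuming the same names, serial, and addresses, might look like:

    RPC="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock"
    $RPC nvmf_create_transport -t tcp -o -u 8192     # same transport options the harness passes above
    $RPC bdev_malloc_create 64 512 -b Malloc0        # MALLOC_BDEV_SIZE=64 MiB, MALLOC_BLOCK_SIZE=512 B
    $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -s SPDKISFASTANDAWESOME
    $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Malloc0
    $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
    $RPC nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0

Whether the subsystem initially allows any host or only host0 is decided by rpcs.txt; host0 is added explicitly here because the test later toggles it with nvmf_subsystem_remove_host/add_host.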
00:20:08.797 09:48:32 -- common/autotest_common.sh@826 -- # xtrace_disable 00:20:08.797 09:48:32 -- target/host_management.sh@72 -- # gen_nvmf_target_json 0 00:20:08.797 09:48:32 -- common/autotest_common.sh@10 -- # set +x 00:20:08.797 09:48:32 -- target/host_management.sh@72 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:20:08.797 09:48:32 -- nvmf/common.sh@521 -- # config=() 00:20:08.797 09:48:32 -- nvmf/common.sh@521 -- # local subsystem config 00:20:08.797 09:48:32 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:20:08.797 09:48:32 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:20:08.797 { 00:20:08.797 "params": { 00:20:08.797 "name": "Nvme$subsystem", 00:20:08.797 "trtype": "$TEST_TRANSPORT", 00:20:08.797 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:08.797 "adrfam": "ipv4", 00:20:08.797 "trsvcid": "$NVMF_PORT", 00:20:08.797 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:08.797 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:08.797 "hdgst": ${hdgst:-false}, 00:20:08.797 "ddgst": ${ddgst:-false} 00:20:08.797 }, 00:20:08.797 "method": "bdev_nvme_attach_controller" 00:20:08.797 } 00:20:08.797 EOF 00:20:08.797 )") 00:20:08.797 09:48:32 -- nvmf/common.sh@543 -- # cat 00:20:08.797 09:48:32 -- nvmf/common.sh@545 -- # jq . 00:20:08.797 09:48:32 -- nvmf/common.sh@546 -- # IFS=, 00:20:09.055 09:48:32 -- nvmf/common.sh@547 -- # printf '%s\n' '{ 00:20:09.055 "params": { 00:20:09.055 "name": "Nvme0", 00:20:09.055 "trtype": "tcp", 00:20:09.055 "traddr": "10.0.0.2", 00:20:09.055 "adrfam": "ipv4", 00:20:09.055 "trsvcid": "4420", 00:20:09.055 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:20:09.055 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:20:09.055 "hdgst": false, 00:20:09.055 "ddgst": false 00:20:09.055 }, 00:20:09.055 "method": "bdev_nvme_attach_controller" 00:20:09.055 }' 00:20:09.055 [2024-04-18 09:48:32.302848] Starting SPDK v24.05-pre git sha1 65b4e17c6 / DPDK 23.11.0 initialization... 00:20:09.055 [2024-04-18 09:48:32.303313] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid67838 ] 00:20:09.314 [2024-04-18 09:48:32.483187] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:09.572 [2024-04-18 09:48:32.780327] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:20:09.831 Running I/O for 10 seconds... 
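bdevperf is pointed at the target through the JSON fragment printed above (a bdev_nvme_attach_controller entry for Nvme0 at 10.0.0.2:4420, nqn.2016-06.io.spdk:cnode0/host0). A stand-alone sketch of the same invocation, with the generated config saved to a hypothetical file instead of being passed through /dev/fd/63:

    CONF=/tmp/bdevperf_nvme.json    # hypothetical file holding the JSON printed above
    /home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
        -r /var/tmp/bdevperf.sock --json "$CONF" \
        -q 64 \        # queue depth per channel
        -o 65536 \     # 64 KiB I/O size
        -w verify \    # write-then-read-back verification workload
        -t 10          # run for 10 seconds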
00:20:10.090 09:48:33 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:20:10.090 09:48:33 -- common/autotest_common.sh@850 -- # return 0 00:20:10.090 09:48:33 -- target/host_management.sh@75 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:20:10.090 09:48:33 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:10.090 09:48:33 -- common/autotest_common.sh@10 -- # set +x 00:20:10.090 09:48:33 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:10.090 09:48:33 -- target/host_management.sh@78 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:20:10.090 09:48:33 -- target/host_management.sh@80 -- # waitforio /var/tmp/bdevperf.sock Nvme0n1 00:20:10.090 09:48:33 -- target/host_management.sh@45 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:20:10.090 09:48:33 -- target/host_management.sh@49 -- # '[' -z Nvme0n1 ']' 00:20:10.090 09:48:33 -- target/host_management.sh@52 -- # local ret=1 00:20:10.090 09:48:33 -- target/host_management.sh@53 -- # local i 00:20:10.090 09:48:33 -- target/host_management.sh@54 -- # (( i = 10 )) 00:20:10.090 09:48:33 -- target/host_management.sh@54 -- # (( i != 0 )) 00:20:10.090 09:48:33 -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 00:20:10.090 09:48:33 -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:20:10.090 09:48:33 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:10.090 09:48:33 -- common/autotest_common.sh@10 -- # set +x 00:20:10.090 09:48:33 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:10.090 09:48:33 -- target/host_management.sh@55 -- # read_io_count=3 00:20:10.090 09:48:33 -- target/host_management.sh@58 -- # '[' 3 -ge 100 ']' 00:20:10.090 09:48:33 -- target/host_management.sh@62 -- # sleep 0.25 00:20:10.411 09:48:33 -- target/host_management.sh@54 -- # (( i-- )) 00:20:10.411 09:48:33 -- target/host_management.sh@54 -- # (( i != 0 )) 00:20:10.411 09:48:33 -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 00:20:10.411 09:48:33 -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:20:10.411 09:48:33 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:10.411 09:48:33 -- common/autotest_common.sh@10 -- # set +x 00:20:10.411 09:48:33 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:10.411 09:48:33 -- target/host_management.sh@55 -- # read_io_count=387 00:20:10.411 09:48:33 -- target/host_management.sh@58 -- # '[' 387 -ge 100 ']' 00:20:10.411 09:48:33 -- target/host_management.sh@59 -- # ret=0 00:20:10.411 09:48:33 -- target/host_management.sh@60 -- # break 00:20:10.411 09:48:33 -- target/host_management.sh@64 -- # return 0 00:20:10.411 09:48:33 -- target/host_management.sh@84 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:20:10.411 09:48:33 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:10.411 09:48:33 -- common/autotest_common.sh@10 -- # set +x 00:20:10.411 [2024-04-18 09:48:33.614757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:60928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:10.411 [2024-04-18 09:48:33.614830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:10.411 [2024-04-18 09:48:33.614867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:61056 len:128 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:20:10.411 [2024-04-18 09:48:33.614883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:10.411 [2024-04-18 09:48:33.614899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:61184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:10.411 [2024-04-18 09:48:33.614913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:10.411 [2024-04-18 09:48:33.614928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:61312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:10.411 [2024-04-18 09:48:33.614942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:10.411 [2024-04-18 09:48:33.614957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:61440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:10.411 [2024-04-18 09:48:33.614971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:10.411 [2024-04-18 09:48:33.614987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:61568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:10.411 [2024-04-18 09:48:33.614999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:10.411 [2024-04-18 09:48:33.615015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:61696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:10.411 [2024-04-18 09:48:33.615028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:10.411 [2024-04-18 09:48:33.615044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:61824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:10.411 [2024-04-18 09:48:33.615057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:10.411 [2024-04-18 09:48:33.615072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:61952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:10.411 [2024-04-18 09:48:33.615085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:10.411 [2024-04-18 09:48:33.615101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:62080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:10.411 [2024-04-18 09:48:33.615114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:10.411 [2024-04-18 09:48:33.615147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:62208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:10.411 [2024-04-18 09:48:33.615163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:10.411 [2024-04-18 09:48:33.615179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:62336 len:128 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:20:10.411 [2024-04-18 09:48:33.615208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:10.411 [2024-04-18 09:48:33.615229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:62464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:10.411 [2024-04-18 09:48:33.615242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:10.411 [2024-04-18 09:48:33.615262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:62592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:10.411 [2024-04-18 09:48:33.615275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:10.411 [2024-04-18 09:48:33.615290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:62720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:10.411 [2024-04-18 09:48:33.615303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:10.411 [2024-04-18 09:48:33.615319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:62848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:10.411 [2024-04-18 09:48:33.615332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:10.411 [2024-04-18 09:48:33.615348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:62976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:10.411 [2024-04-18 09:48:33.615361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:10.411 [2024-04-18 09:48:33.615385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:63104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:10.411 [2024-04-18 09:48:33.615399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:10.411 [2024-04-18 09:48:33.615414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:63232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:10.411 [2024-04-18 09:48:33.615427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:10.411 [2024-04-18 09:48:33.615442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:63360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:10.411 [2024-04-18 09:48:33.615455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:10.411 [2024-04-18 09:48:33.615470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:63488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:10.412 [2024-04-18 09:48:33.615491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:10.412 [2024-04-18 09:48:33.615515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:63616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:20:10.412 [2024-04-18 09:48:33.615540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:10.412 [2024-04-18 09:48:33.615556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:63744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:10.412 [2024-04-18 09:48:33.615569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:10.412 [2024-04-18 09:48:33.615589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:63872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:10.412 [2024-04-18 09:48:33.615606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:10.412 [2024-04-18 09:48:33.615622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:64000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:10.412 [2024-04-18 09:48:33.615635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:10.412 [2024-04-18 09:48:33.615650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:64128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:10.412 [2024-04-18 09:48:33.615663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:10.412 [2024-04-18 09:48:33.615679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:64256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:10.412 [2024-04-18 09:48:33.615692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:10.412 [2024-04-18 09:48:33.615707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:64384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:10.412 [2024-04-18 09:48:33.615725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:10.412 [2024-04-18 09:48:33.615741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:64512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:10.412 [2024-04-18 09:48:33.615754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:10.412 [2024-04-18 09:48:33.615776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:64640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:10.412 [2024-04-18 09:48:33.615788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:10.412 [2024-04-18 09:48:33.615803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:64768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:10.412 [2024-04-18 09:48:33.615816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:10.412 [2024-04-18 09:48:33.615832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:64896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:10.412 
[2024-04-18 09:48:33.615845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:10.412 [2024-04-18 09:48:33.615860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:65024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:10.412 [2024-04-18 09:48:33.615872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:10.412 [2024-04-18 09:48:33.615892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:65152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:10.412 [2024-04-18 09:48:33.615906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:10.412 [2024-04-18 09:48:33.615921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:65280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:10.412 [2024-04-18 09:48:33.615934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:10.412 [2024-04-18 09:48:33.615949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:65408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:10.412 [2024-04-18 09:48:33.615962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:10.412 [2024-04-18 09:48:33.615977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:57344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:10.412 [2024-04-18 09:48:33.615990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:10.412 [2024-04-18 09:48:33.616029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:57472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:10.412 [2024-04-18 09:48:33.616043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:10.412 [2024-04-18 09:48:33.616058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:57600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:10.412 [2024-04-18 09:48:33.616072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:10.412 [2024-04-18 09:48:33.616088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:57728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:10.412 [2024-04-18 09:48:33.616101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:10.412 [2024-04-18 09:48:33.616116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:57856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:10.412 [2024-04-18 09:48:33.616143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:10.412 [2024-04-18 09:48:33.616161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:57984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:10.412 [2024-04-18 
09:48:33.616174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:10.412 [2024-04-18 09:48:33.616189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:58112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:10.412 [2024-04-18 09:48:33.616203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:10.412 [2024-04-18 09:48:33.616218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:58240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:10.412 [2024-04-18 09:48:33.616240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:10.412 [2024-04-18 09:48:33.616256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:58368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:10.412 [2024-04-18 09:48:33.616274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:10.412 [2024-04-18 09:48:33.616300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:58496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:10.412 [2024-04-18 09:48:33.616315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:10.412 [2024-04-18 09:48:33.616332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:58624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:10.412 [2024-04-18 09:48:33.616345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:10.412 [2024-04-18 09:48:33.616366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:58752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:10.412 [2024-04-18 09:48:33.616381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:10.412 [2024-04-18 09:48:33.616396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:58880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:10.412 [2024-04-18 09:48:33.616410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:10.412 [2024-04-18 09:48:33.616431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:59008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:10.412 [2024-04-18 09:48:33.616444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:10.412 [2024-04-18 09:48:33.616459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:59136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:10.412 [2024-04-18 09:48:33.616472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:10.412 [2024-04-18 09:48:33.616487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:59264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:10.412 [2024-04-18 09:48:33.616500] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:10.412 [2024-04-18 09:48:33.616515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:59392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:10.412 [2024-04-18 09:48:33.616528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:10.412 [2024-04-18 09:48:33.616544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:59520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:10.412 [2024-04-18 09:48:33.616557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:10.412 [2024-04-18 09:48:33.616572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:59648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:10.412 [2024-04-18 09:48:33.616585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:10.412 [2024-04-18 09:48:33.616600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:59776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:10.412 [2024-04-18 09:48:33.616612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:10.412 [2024-04-18 09:48:33.616627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:59904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:10.412 [2024-04-18 09:48:33.616640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:10.412 [2024-04-18 09:48:33.616655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:60032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:10.412 [2024-04-18 09:48:33.616668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:10.412 [2024-04-18 09:48:33.616683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:60160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:10.412 [2024-04-18 09:48:33.616695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:10.412 [2024-04-18 09:48:33.616711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:60288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:10.412 [2024-04-18 09:48:33.616729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:10.412 [2024-04-18 09:48:33.616744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:60416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:10.412 [2024-04-18 09:48:33.616756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:10.412 [2024-04-18 09:48:33.616772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:60544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:10.412 [2024-04-18 09:48:33.616785] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:10.412 [2024-04-18 09:48:33.616800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:60672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:10.412 [2024-04-18 09:48:33.616813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:10.412 [2024-04-18 09:48:33.616828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:60800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:10.412 [2024-04-18 09:48:33.616841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:10.412 [2024-04-18 09:48:33.617189] bdev_nvme.c:1600:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x614000008040 was disconnected and freed. reset controller. 00:20:10.412 [2024-04-18 09:48:33.617324] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:20:10.412 [2024-04-18 09:48:33.617348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:10.412 [2024-04-18 09:48:33.617366] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:20:10.412 [2024-04-18 09:48:33.617380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:10.412 [2024-04-18 09:48:33.617394] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:20:10.412 [2024-04-18 09:48:33.617407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:10.412 [2024-04-18 09:48:33.617421] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:20:10.412 [2024-04-18 09:48:33.617433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:10.412 [2024-04-18 09:48:33.617446] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x614000005c40 is same with the state(5) to be set 00:20:10.412 [2024-04-18 09:48:33.618701] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:20:10.412 09:48:33 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:10.412 09:48:33 -- target/host_management.sh@85 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:20:10.412 09:48:33 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:10.412 09:48:33 -- common/autotest_common.sh@10 -- # set +x 00:20:10.412 task offset: 60928 on job bdev=Nvme0n1 fails 00:20:10.412 00:20:10.412 Latency(us) 00:20:10.412 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:10.412 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:20:10.412 Job: Nvme0n1 ended in about 0.40 seconds with error 00:20:10.412 Verification LBA range: start 0x0 length 0x400 00:20:10.412 Nvme0n1 : 0.40 1113.69 69.61 159.10 0.00 48563.75 3336.38 48377.48 00:20:10.412 
=================================================================================================================== 00:20:10.412 Total : 1113.69 69.61 159.10 0.00 48563.75 3336.38 48377.48 00:20:10.412 [2024-04-18 09:48:33.623602] app.c: 966:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:20:10.412 [2024-04-18 09:48:33.623659] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x614000005c40 (9): Bad file descriptor 00:20:10.412 09:48:33 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:10.412 09:48:33 -- target/host_management.sh@87 -- # sleep 1 00:20:10.412 [2024-04-18 09:48:33.639119] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:20:11.384 09:48:34 -- target/host_management.sh@91 -- # kill -9 67838 00:20:11.384 /home/vagrant/spdk_repo/spdk/test/nvmf/target/host_management.sh: line 91: kill: (67838) - No such process 00:20:11.384 09:48:34 -- target/host_management.sh@91 -- # true 00:20:11.384 09:48:34 -- target/host_management.sh@97 -- # rm -f /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 /var/tmp/spdk_cpu_lock_003 /var/tmp/spdk_cpu_lock_004 00:20:11.384 09:48:34 -- target/host_management.sh@100 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:20:11.384 09:48:34 -- target/host_management.sh@100 -- # gen_nvmf_target_json 0 00:20:11.384 09:48:34 -- nvmf/common.sh@521 -- # config=() 00:20:11.384 09:48:34 -- nvmf/common.sh@521 -- # local subsystem config 00:20:11.384 09:48:34 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:20:11.384 09:48:34 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:20:11.384 { 00:20:11.384 "params": { 00:20:11.384 "name": "Nvme$subsystem", 00:20:11.384 "trtype": "$TEST_TRANSPORT", 00:20:11.384 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:11.385 "adrfam": "ipv4", 00:20:11.385 "trsvcid": "$NVMF_PORT", 00:20:11.385 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:11.385 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:11.385 "hdgst": ${hdgst:-false}, 00:20:11.385 "ddgst": ${ddgst:-false} 00:20:11.385 }, 00:20:11.385 "method": "bdev_nvme_attach_controller" 00:20:11.385 } 00:20:11.385 EOF 00:20:11.385 )") 00:20:11.385 09:48:34 -- nvmf/common.sh@543 -- # cat 00:20:11.385 09:48:34 -- nvmf/common.sh@545 -- # jq . 00:20:11.385 09:48:34 -- nvmf/common.sh@546 -- # IFS=, 00:20:11.385 09:48:34 -- nvmf/common.sh@547 -- # printf '%s\n' '{ 00:20:11.385 "params": { 00:20:11.385 "name": "Nvme0", 00:20:11.385 "trtype": "tcp", 00:20:11.385 "traddr": "10.0.0.2", 00:20:11.385 "adrfam": "ipv4", 00:20:11.385 "trsvcid": "4420", 00:20:11.385 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:20:11.385 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:20:11.385 "hdgst": false, 00:20:11.385 "ddgst": false 00:20:11.385 }, 00:20:11.385 "method": "bdev_nvme_attach_controller" 00:20:11.385 }' 00:20:11.385 [2024-04-18 09:48:34.741019] Starting SPDK v24.05-pre git sha1 65b4e17c6 / DPDK 23.11.0 initialization... 
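For reference, the config that gen_nvmf_target_json pipes to bdevperf via /dev/fd/62 above is an ordinary SPDK JSON config containing a single bdev_nvme_attach_controller entry. A minimal standalone sketch follows; the address, NQNs and workload flags are copied from the trace, while the outer "subsystems"/"config" wrapper is the usual SPDK JSON config layout and is assumed here rather than shown verbatim in the log.

```bash
#!/usr/bin/env bash
# Sketch: re-run the bdevperf workload above against the same target, writing
# the config to a temporary file instead of piping it through /dev/fd/62.
set -euo pipefail

cfg=$(mktemp)
cat > "$cfg" <<'EOF'
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        {
          "method": "bdev_nvme_attach_controller",
          "params": {
            "name": "Nvme0",
            "trtype": "tcp",
            "traddr": "10.0.0.2",
            "adrfam": "ipv4",
            "trsvcid": "4420",
            "subnqn": "nqn.2016-06.io.spdk:cnode0",
            "hostnqn": "nqn.2016-06.io.spdk:host0",
            "hdgst": false,
            "ddgst": false
          }
        }
      ]
    }
  ]
}
EOF

# Same knobs as the run above: queue depth 64, 64 KiB I/O, verify workload, 1 s.
/home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
    --json "$cfg" -q 64 -o 65536 -w verify -t 1
rm -f "$cfg"
```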
00:20:11.385 [2024-04-18 09:48:34.741221] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid67885 ] 00:20:11.643 [2024-04-18 09:48:34.922224] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:11.901 [2024-04-18 09:48:35.247635] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:20:12.468 Running I/O for 1 seconds... 00:20:13.408 00:20:13.408 Latency(us) 00:20:13.408 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:13.409 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:20:13.409 Verification LBA range: start 0x0 length 0x400 00:20:13.409 Nvme0n1 : 1.03 1179.09 73.69 0.00 0.00 53156.15 7238.75 52667.11 00:20:13.409 =================================================================================================================== 00:20:13.409 Total : 1179.09 73.69 0.00 0.00 53156.15 7238.75 52667.11 00:20:14.783 09:48:38 -- target/host_management.sh@102 -- # stoptarget 00:20:14.783 09:48:38 -- target/host_management.sh@36 -- # rm -f ./local-job0-0-verify.state 00:20:14.783 09:48:38 -- target/host_management.sh@37 -- # rm -rf /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdevperf.conf 00:20:15.040 09:48:38 -- target/host_management.sh@38 -- # rm -rf /home/vagrant/spdk_repo/spdk/test/nvmf/target/rpcs.txt 00:20:15.040 09:48:38 -- target/host_management.sh@40 -- # nvmftestfini 00:20:15.040 09:48:38 -- nvmf/common.sh@477 -- # nvmfcleanup 00:20:15.040 09:48:38 -- nvmf/common.sh@117 -- # sync 00:20:15.040 09:48:38 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:20:15.040 09:48:38 -- nvmf/common.sh@120 -- # set +e 00:20:15.040 09:48:38 -- nvmf/common.sh@121 -- # for i in {1..20} 00:20:15.040 09:48:38 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:20:15.040 rmmod nvme_tcp 00:20:15.040 rmmod nvme_fabrics 00:20:15.040 rmmod nvme_keyring 00:20:15.040 09:48:38 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:20:15.040 09:48:38 -- nvmf/common.sh@124 -- # set -e 00:20:15.040 09:48:38 -- nvmf/common.sh@125 -- # return 0 00:20:15.040 09:48:38 -- nvmf/common.sh@478 -- # '[' -n 67784 ']' 00:20:15.040 09:48:38 -- nvmf/common.sh@479 -- # killprocess 67784 00:20:15.040 09:48:38 -- common/autotest_common.sh@936 -- # '[' -z 67784 ']' 00:20:15.040 09:48:38 -- common/autotest_common.sh@940 -- # kill -0 67784 00:20:15.040 09:48:38 -- common/autotest_common.sh@941 -- # uname 00:20:15.040 09:48:38 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:20:15.040 09:48:38 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 67784 00:20:15.298 09:48:38 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:20:15.298 09:48:38 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:20:15.298 killing process with pid 67784 00:20:15.298 09:48:38 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 67784' 00:20:15.298 09:48:38 -- common/autotest_common.sh@955 -- # kill 67784 00:20:15.298 09:48:38 -- common/autotest_common.sh@960 -- # wait 67784 00:20:16.676 [2024-04-18 09:48:39.732451] app.c: 630:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 1, errno: 2 00:20:16.676 09:48:39 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:20:16.676 09:48:39 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:20:16.676 09:48:39 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:20:16.676 09:48:39 
-- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:20:16.676 09:48:39 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:20:16.676 09:48:39 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:16.676 09:48:39 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:20:16.676 09:48:39 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:16.676 09:48:39 -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:20:16.676 00:20:16.676 real 0m8.955s 00:20:16.676 user 0m37.223s 00:20:16.676 sys 0m1.704s 00:20:16.676 09:48:39 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:20:16.676 09:48:39 -- common/autotest_common.sh@10 -- # set +x 00:20:16.676 ************************************ 00:20:16.676 END TEST nvmf_host_management 00:20:16.676 ************************************ 00:20:16.676 09:48:39 -- target/host_management.sh@109 -- # trap - SIGINT SIGTERM EXIT 00:20:16.676 00:20:16.676 real 0m9.586s 00:20:16.676 user 0m37.378s 00:20:16.676 sys 0m1.977s 00:20:16.676 09:48:39 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:20:16.676 09:48:39 -- common/autotest_common.sh@10 -- # set +x 00:20:16.676 ************************************ 00:20:16.676 END TEST nvmf_host_management 00:20:16.676 ************************************ 00:20:16.676 09:48:39 -- nvmf/nvmf.sh@48 -- # run_test nvmf_lvol /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:20:16.676 09:48:39 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:20:16.676 09:48:39 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:20:16.676 09:48:39 -- common/autotest_common.sh@10 -- # set +x 00:20:16.676 ************************************ 00:20:16.676 START TEST nvmf_lvol 00:20:16.676 ************************************ 00:20:16.676 09:48:40 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:20:16.936 * Looking for test storage... 
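The nvmftestfini path traced just above (module unload, killing the nvmf_tgt pid, flushing the initiator interface) can be approximated by hand when an aborted run leaves state behind. A rough sketch using the names from this log; the namespace and bridge deletion steps are an assumption, the other commands appear in the trace.

```bash
#!/usr/bin/env bash
# Rough manual equivalent of the nvmftestfini teardown traced above.
# $nvmfpid stands in for the nvmf_tgt pid printed by the test (67784 here).
set -x

sync
modprobe -v -r nvme-tcp      || true   # also unloads nvme_fabrics/nvme_keyring in the log
modprobe -v -r nvme-fabrics  || true

kill "${nvmfpid:?set to the nvmf_tgt pid}" 2>/dev/null || true

# Undo the veth/namespace plumbing created by nvmf_veth_init.
ip -4 addr flush nvmf_init_if       || true
ip netns delete nvmf_tgt_ns_spdk    || true   # assumed cleanup step
ip link delete nvmf_br type bridge  || true   # assumed cleanup step
```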
00:20:16.936 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:20:16.936 09:48:40 -- target/nvmf_lvol.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:20:16.936 09:48:40 -- nvmf/common.sh@7 -- # uname -s 00:20:16.936 09:48:40 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:16.936 09:48:40 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:16.936 09:48:40 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:16.936 09:48:40 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:16.936 09:48:40 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:16.936 09:48:40 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:16.936 09:48:40 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:16.936 09:48:40 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:16.936 09:48:40 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:16.936 09:48:40 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:16.936 09:48:40 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:e0c1fd61-2902-4c83-8893-eeb512aeb67d 00:20:16.936 09:48:40 -- nvmf/common.sh@18 -- # NVME_HOSTID=e0c1fd61-2902-4c83-8893-eeb512aeb67d 00:20:16.936 09:48:40 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:16.936 09:48:40 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:16.936 09:48:40 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:20:16.936 09:48:40 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:20:16.936 09:48:40 -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:20:16.936 09:48:40 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:16.936 09:48:40 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:16.936 09:48:40 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:16.936 09:48:40 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:16.936 09:48:40 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:16.936 09:48:40 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:16.936 09:48:40 -- paths/export.sh@5 -- # export PATH 00:20:16.936 09:48:40 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:16.936 09:48:40 -- nvmf/common.sh@47 -- # : 0 00:20:16.936 09:48:40 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:20:16.936 09:48:40 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:20:16.936 09:48:40 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:20:16.936 09:48:40 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:16.936 09:48:40 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:16.936 09:48:40 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:20:16.936 09:48:40 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:20:16.936 09:48:40 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:20:16.936 09:48:40 -- target/nvmf_lvol.sh@11 -- # MALLOC_BDEV_SIZE=64 00:20:16.936 09:48:40 -- target/nvmf_lvol.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:20:16.936 09:48:40 -- target/nvmf_lvol.sh@13 -- # LVOL_BDEV_INIT_SIZE=20 00:20:16.936 09:48:40 -- target/nvmf_lvol.sh@14 -- # LVOL_BDEV_FINAL_SIZE=30 00:20:16.936 09:48:40 -- target/nvmf_lvol.sh@16 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:20:16.936 09:48:40 -- target/nvmf_lvol.sh@18 -- # nvmftestinit 00:20:16.936 09:48:40 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:20:16.936 09:48:40 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:16.936 09:48:40 -- nvmf/common.sh@437 -- # prepare_net_devs 00:20:16.936 09:48:40 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:20:16.936 09:48:40 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:20:16.936 09:48:40 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:16.936 09:48:40 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:20:16.936 09:48:40 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:16.936 09:48:40 -- nvmf/common.sh@403 -- # [[ virt != virt ]] 00:20:16.936 09:48:40 -- nvmf/common.sh@405 -- # [[ no == yes ]] 00:20:16.936 09:48:40 -- nvmf/common.sh@412 -- # [[ virt == phy ]] 00:20:16.936 09:48:40 -- nvmf/common.sh@415 -- # [[ virt == phy-fallback ]] 00:20:16.936 09:48:40 -- nvmf/common.sh@420 -- # [[ tcp == tcp ]] 00:20:16.936 09:48:40 -- nvmf/common.sh@421 -- # nvmf_veth_init 00:20:16.936 09:48:40 -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:16.936 09:48:40 -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:20:16.936 09:48:40 -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:20:16.936 09:48:40 -- 
nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:20:16.936 09:48:40 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:20:16.936 09:48:40 -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:20:16.936 09:48:40 -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:20:16.936 09:48:40 -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:20:16.936 09:48:40 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:20:16.936 09:48:40 -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:20:16.936 09:48:40 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:20:16.936 09:48:40 -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:20:16.936 09:48:40 -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:20:16.936 09:48:40 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:20:16.936 Cannot find device "nvmf_tgt_br" 00:20:16.936 09:48:40 -- nvmf/common.sh@155 -- # true 00:20:16.936 09:48:40 -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:20:16.936 Cannot find device "nvmf_tgt_br2" 00:20:16.936 09:48:40 -- nvmf/common.sh@156 -- # true 00:20:16.936 09:48:40 -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:20:16.936 09:48:40 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:20:16.936 Cannot find device "nvmf_tgt_br" 00:20:16.936 09:48:40 -- nvmf/common.sh@158 -- # true 00:20:16.936 09:48:40 -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:20:16.936 Cannot find device "nvmf_tgt_br2" 00:20:16.936 09:48:40 -- nvmf/common.sh@159 -- # true 00:20:16.936 09:48:40 -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:20:16.936 09:48:40 -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:20:16.936 09:48:40 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:20:16.936 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:20:16.936 09:48:40 -- nvmf/common.sh@162 -- # true 00:20:16.936 09:48:40 -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:20:16.936 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:20:16.936 09:48:40 -- nvmf/common.sh@163 -- # true 00:20:16.936 09:48:40 -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:20:16.936 09:48:40 -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:20:16.936 09:48:40 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:20:17.196 09:48:40 -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:20:17.196 09:48:40 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:20:17.196 09:48:40 -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:20:17.196 09:48:40 -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:20:17.196 09:48:40 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:20:17.196 09:48:40 -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:20:17.196 09:48:40 -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:20:17.196 09:48:40 -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:20:17.196 09:48:40 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:20:17.196 09:48:40 -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:20:17.196 09:48:40 -- 
nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:20:17.196 09:48:40 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:20:17.196 09:48:40 -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:20:17.196 09:48:40 -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:20:17.196 09:48:40 -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:20:17.196 09:48:40 -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:20:17.196 09:48:40 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:20:17.196 09:48:40 -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:20:17.196 09:48:40 -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:20:17.196 09:48:40 -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:20:17.196 09:48:40 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:20:17.196 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:20:17.196 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.059 ms 00:20:17.196 00:20:17.196 --- 10.0.0.2 ping statistics --- 00:20:17.196 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:17.196 rtt min/avg/max/mdev = 0.059/0.059/0.059/0.000 ms 00:20:17.196 09:48:40 -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:20:17.196 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:20:17.196 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.053 ms 00:20:17.196 00:20:17.196 --- 10.0.0.3 ping statistics --- 00:20:17.196 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:17.196 rtt min/avg/max/mdev = 0.053/0.053/0.053/0.000 ms 00:20:17.196 09:48:40 -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:20:17.196 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:20:17.196 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.026 ms 00:20:17.196 00:20:17.196 --- 10.0.0.1 ping statistics --- 00:20:17.196 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:17.196 rtt min/avg/max/mdev = 0.026/0.026/0.026/0.000 ms 00:20:17.196 09:48:40 -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:17.196 09:48:40 -- nvmf/common.sh@422 -- # return 0 00:20:17.196 09:48:40 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:20:17.196 09:48:40 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:17.196 09:48:40 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:20:17.196 09:48:40 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:20:17.196 09:48:40 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:17.196 09:48:40 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:20:17.196 09:48:40 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:20:17.196 09:48:40 -- target/nvmf_lvol.sh@19 -- # nvmfappstart -m 0x7 00:20:17.196 09:48:40 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:20:17.196 09:48:40 -- common/autotest_common.sh@710 -- # xtrace_disable 00:20:17.196 09:48:40 -- common/autotest_common.sh@10 -- # set +x 00:20:17.196 09:48:40 -- nvmf/common.sh@470 -- # nvmfpid=68152 00:20:17.196 09:48:40 -- nvmf/common.sh@471 -- # waitforlisten 68152 00:20:17.196 09:48:40 -- common/autotest_common.sh@817 -- # '[' -z 68152 ']' 00:20:17.196 09:48:40 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:17.196 09:48:40 -- nvmf/common.sh@469 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 00:20:17.196 09:48:40 -- common/autotest_common.sh@822 -- # local max_retries=100 00:20:17.196 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:17.196 09:48:40 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:17.196 09:48:40 -- common/autotest_common.sh@826 -- # xtrace_disable 00:20:17.196 09:48:40 -- common/autotest_common.sh@10 -- # set +x 00:20:17.456 [2024-04-18 09:48:40.709974] Starting SPDK v24.05-pre git sha1 65b4e17c6 / DPDK 23.11.0 initialization... 00:20:17.456 [2024-04-18 09:48:40.710223] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:17.715 [2024-04-18 09:48:40.899064] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 3 00:20:17.974 [2024-04-18 09:48:41.196378] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:17.974 [2024-04-18 09:48:41.196464] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:17.974 [2024-04-18 09:48:41.196488] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:17.974 [2024-04-18 09:48:41.196519] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:17.974 [2024-04-18 09:48:41.196538] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
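The nvmf_veth_init sequence above builds the loopback NVMe/TCP topology these targeted tests rely on: the target listens on 10.0.0.2/10.0.0.3 inside the nvmf_tgt_ns_spdk namespace and the initiator reaches it from 10.0.0.1 through the nvmf_br bridge. Condensed into a standalone sketch; every command is taken from the trace, only the ordering is tightened.

```bash
#!/usr/bin/env bash
# Condensed replay of nvmf_veth_init as traced above.
set -euo pipefail

ip netns add nvmf_tgt_ns_spdk

# veth pairs: the initiator end stays in the default namespace, target ends move in.
ip link add nvmf_init_if type veth peer name nvmf_init_br
ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br
ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2
ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk

# Addressing used throughout the tests: .1 initiator, .2/.3 target listeners.
ip addr add 10.0.0.1/24 dev nvmf_init_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2

for link in nvmf_init_if nvmf_init_br nvmf_tgt_br nvmf_tgt_br2; do
    ip link set "$link" up
done
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
ip netns exec nvmf_tgt_ns_spdk ip link set lo up

# Bridge everything together and let NVMe/TCP (port 4420) through.
ip link add nvmf_br type bridge
ip link set nvmf_br up
for link in nvmf_init_br nvmf_tgt_br nvmf_tgt_br2; do
    ip link set "$link" master nvmf_br
done
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT

# Sanity pings, mirroring the checks in the log.
ping -c 1 10.0.0.2
ping -c 1 10.0.0.3
ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1
```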
00:20:17.974 [2024-04-18 09:48:41.196878] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:20:17.974 [2024-04-18 09:48:41.196992] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:20:17.974 [2024-04-18 09:48:41.197008] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:20:18.542 09:48:41 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:20:18.542 09:48:41 -- common/autotest_common.sh@850 -- # return 0 00:20:18.542 09:48:41 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:20:18.542 09:48:41 -- common/autotest_common.sh@716 -- # xtrace_disable 00:20:18.542 09:48:41 -- common/autotest_common.sh@10 -- # set +x 00:20:18.542 09:48:41 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:18.542 09:48:41 -- target/nvmf_lvol.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:20:18.801 [2024-04-18 09:48:41.978924] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:18.801 09:48:42 -- target/nvmf_lvol.sh@24 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:20:19.059 09:48:42 -- target/nvmf_lvol.sh@24 -- # base_bdevs='Malloc0 ' 00:20:19.059 09:48:42 -- target/nvmf_lvol.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:20:19.626 09:48:42 -- target/nvmf_lvol.sh@25 -- # base_bdevs+=Malloc1 00:20:19.626 09:48:42 -- target/nvmf_lvol.sh@26 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1' 00:20:19.899 09:48:43 -- target/nvmf_lvol.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore raid0 lvs 00:20:20.164 09:48:43 -- target/nvmf_lvol.sh@29 -- # lvs=4f97b87d-246f-4e9c-88c8-d7910f5bf703 00:20:20.164 09:48:43 -- target/nvmf_lvol.sh@32 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u 4f97b87d-246f-4e9c-88c8-d7910f5bf703 lvol 20 00:20:20.422 09:48:43 -- target/nvmf_lvol.sh@32 -- # lvol=588ac3e9-44d1-4931-a237-ea70e40127da 00:20:20.422 09:48:43 -- target/nvmf_lvol.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:20:20.681 09:48:43 -- target/nvmf_lvol.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 588ac3e9-44d1-4931-a237-ea70e40127da 00:20:20.939 09:48:44 -- target/nvmf_lvol.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:20:21.198 [2024-04-18 09:48:44.461372] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:21.198 09:48:44 -- target/nvmf_lvol.sh@38 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:20:21.456 09:48:44 -- target/nvmf_lvol.sh@42 -- # perf_pid=68228 00:20:21.456 09:48:44 -- target/nvmf_lvol.sh@41 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -o 4096 -q 128 -s 512 -w randwrite -t 10 -c 0x18 00:20:21.456 09:48:44 -- target/nvmf_lvol.sh@44 -- # sleep 1 00:20:22.391 09:48:45 -- target/nvmf_lvol.sh@47 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_snapshot 588ac3e9-44d1-4931-a237-ea70e40127da MY_SNAPSHOT 00:20:22.958 09:48:46 -- target/nvmf_lvol.sh@47 -- # snapshot=151eb762-2566-4b1e-a088-986d69888210 00:20:22.958 09:48:46 -- target/nvmf_lvol.sh@48 
-- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_resize 588ac3e9-44d1-4931-a237-ea70e40127da 30 00:20:23.215 09:48:46 -- target/nvmf_lvol.sh@49 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_clone 151eb762-2566-4b1e-a088-986d69888210 MY_CLONE 00:20:23.473 09:48:46 -- target/nvmf_lvol.sh@49 -- # clone=2597d769-7acb-496a-a789-02145282f006 00:20:23.473 09:48:46 -- target/nvmf_lvol.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_inflate 2597d769-7acb-496a-a789-02145282f006 00:20:24.040 09:48:47 -- target/nvmf_lvol.sh@53 -- # wait 68228 00:20:32.173 Initializing NVMe Controllers 00:20:32.173 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:20:32.173 Controller IO queue size 128, less than required. 00:20:32.173 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:20:32.173 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 3 00:20:32.173 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 4 00:20:32.173 Initialization complete. Launching workers. 00:20:32.173 ======================================================== 00:20:32.173 Latency(us) 00:20:32.173 Device Information : IOPS MiB/s Average min max 00:20:32.173 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 3: 7190.60 28.09 17808.76 658.12 151760.40 00:20:32.173 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 4: 7091.10 27.70 18068.62 7180.66 169549.84 00:20:32.173 ======================================================== 00:20:32.173 Total : 14281.70 55.79 17937.78 658.12 169549.84 00:20:32.173 00:20:32.173 09:48:55 -- target/nvmf_lvol.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:20:32.173 09:48:55 -- target/nvmf_lvol.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete 588ac3e9-44d1-4931-a237-ea70e40127da 00:20:32.432 09:48:55 -- target/nvmf_lvol.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 4f97b87d-246f-4e9c-88c8-d7910f5bf703 00:20:32.691 09:48:55 -- target/nvmf_lvol.sh@60 -- # rm -f 00:20:32.691 09:48:56 -- target/nvmf_lvol.sh@62 -- # trap - SIGINT SIGTERM EXIT 00:20:32.691 09:48:56 -- target/nvmf_lvol.sh@64 -- # nvmftestfini 00:20:32.691 09:48:56 -- nvmf/common.sh@477 -- # nvmfcleanup 00:20:32.691 09:48:56 -- nvmf/common.sh@117 -- # sync 00:20:32.691 09:48:56 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:20:32.691 09:48:56 -- nvmf/common.sh@120 -- # set +e 00:20:32.691 09:48:56 -- nvmf/common.sh@121 -- # for i in {1..20} 00:20:32.691 09:48:56 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:20:32.691 rmmod nvme_tcp 00:20:32.691 rmmod nvme_fabrics 00:20:32.691 rmmod nvme_keyring 00:20:32.949 09:48:56 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:20:32.949 09:48:56 -- nvmf/common.sh@124 -- # set -e 00:20:32.949 09:48:56 -- nvmf/common.sh@125 -- # return 0 00:20:32.949 09:48:56 -- nvmf/common.sh@478 -- # '[' -n 68152 ']' 00:20:32.949 09:48:56 -- nvmf/common.sh@479 -- # killprocess 68152 00:20:32.949 09:48:56 -- common/autotest_common.sh@936 -- # '[' -z 68152 ']' 00:20:32.949 09:48:56 -- common/autotest_common.sh@940 -- # kill -0 68152 00:20:32.949 09:48:56 -- common/autotest_common.sh@941 -- # uname 00:20:32.949 09:48:56 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:20:32.949 09:48:56 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 
68152 00:20:32.949 killing process with pid 68152 00:20:32.949 09:48:56 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:20:32.949 09:48:56 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:20:32.949 09:48:56 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 68152' 00:20:32.949 09:48:56 -- common/autotest_common.sh@955 -- # kill 68152 00:20:32.949 09:48:56 -- common/autotest_common.sh@960 -- # wait 68152 00:20:34.343 09:48:57 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:20:34.343 09:48:57 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:20:34.343 09:48:57 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:20:34.343 09:48:57 -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:20:34.343 09:48:57 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:20:34.343 09:48:57 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:34.343 09:48:57 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:20:34.343 09:48:57 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:34.343 09:48:57 -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:20:34.343 ************************************ 00:20:34.343 END TEST nvmf_lvol 00:20:34.343 ************************************ 00:20:34.343 00:20:34.343 real 0m17.642s 00:20:34.343 user 1m9.859s 00:20:34.343 sys 0m4.496s 00:20:34.343 09:48:57 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:20:34.343 09:48:57 -- common/autotest_common.sh@10 -- # set +x 00:20:34.343 09:48:57 -- nvmf/nvmf.sh@49 -- # run_test nvmf_lvs_grow /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:20:34.343 09:48:57 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:20:34.343 09:48:57 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:20:34.343 09:48:57 -- common/autotest_common.sh@10 -- # set +x 00:20:34.602 ************************************ 00:20:34.602 START TEST nvmf_lvs_grow 00:20:34.602 ************************************ 00:20:34.603 09:48:57 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:20:34.603 * Looking for test storage... 
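Before the lvs_grow suite starts: the nvmf_lvol run that just finished reduces to the RPC sequence below, assuming an nvmf_tgt is already running and rpc.py can reach its socket. Names, sizes and flags are copied from the trace; the command substitutions capture the UUIDs that rpc.py prints, which is how the test script itself threads them through.

```bash
#!/usr/bin/env bash
# Condensed replay of the nvmf_lvol volume stack exercised above:
# two malloc bdevs -> raid0 -> lvstore -> lvol exported over NVMe/TCP,
# then snapshot/resize/clone/inflate while perf keeps writing.
set -euo pipefail
rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

$rpc nvmf_create_transport -t tcp -o -u 8192

$rpc bdev_malloc_create 64 512            # -> Malloc0
$rpc bdev_malloc_create 64 512            # -> Malloc1
$rpc bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1'

lvs=$($rpc bdev_lvol_create_lvstore raid0 lvs)
lvol=$($rpc bdev_lvol_create -u "$lvs" lvol 20)

$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 "$lvol"
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
$rpc nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420

# Background writer, same parameters as the traced spdk_nvme_perf run.
/home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf \
    -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' \
    -o 4096 -q 128 -s 512 -w randwrite -t 10 -c 0x18 &
perf_pid=$!

snap=$($rpc bdev_lvol_snapshot "$lvol" MY_SNAPSHOT)
$rpc bdev_lvol_resize "$lvol" 30
clone=$($rpc bdev_lvol_clone "$snap" MY_CLONE)
$rpc bdev_lvol_inflate "$clone"

wait "$perf_pid"

# Teardown mirrors the end of the test.
$rpc nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0
$rpc bdev_lvol_delete "$lvol"
$rpc bdev_lvol_delete_lvstore -u "$lvs"
```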
00:20:34.603 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:20:34.603 09:48:57 -- target/nvmf_lvs_grow.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:20:34.603 09:48:57 -- nvmf/common.sh@7 -- # uname -s 00:20:34.603 09:48:57 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:34.603 09:48:57 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:34.603 09:48:57 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:34.603 09:48:57 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:34.603 09:48:57 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:34.603 09:48:57 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:34.603 09:48:57 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:34.603 09:48:57 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:34.603 09:48:57 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:34.603 09:48:57 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:34.603 09:48:57 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:e0c1fd61-2902-4c83-8893-eeb512aeb67d 00:20:34.603 09:48:57 -- nvmf/common.sh@18 -- # NVME_HOSTID=e0c1fd61-2902-4c83-8893-eeb512aeb67d 00:20:34.603 09:48:57 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:34.603 09:48:57 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:34.603 09:48:57 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:20:34.603 09:48:57 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:20:34.603 09:48:57 -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:20:34.603 09:48:57 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:34.603 09:48:57 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:34.603 09:48:57 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:34.603 09:48:57 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:34.603 09:48:57 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:34.603 09:48:57 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:34.603 09:48:57 -- paths/export.sh@5 -- # export PATH 00:20:34.603 09:48:57 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:34.603 09:48:57 -- nvmf/common.sh@47 -- # : 0 00:20:34.603 09:48:57 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:20:34.603 09:48:57 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:20:34.603 09:48:57 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:20:34.603 09:48:57 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:34.603 09:48:57 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:34.603 09:48:57 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:20:34.603 09:48:57 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:20:34.603 09:48:57 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:20:34.603 09:48:57 -- target/nvmf_lvs_grow.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:20:34.603 09:48:57 -- target/nvmf_lvs_grow.sh@12 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:20:34.603 09:48:57 -- target/nvmf_lvs_grow.sh@97 -- # nvmftestinit 00:20:34.603 09:48:57 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:20:34.603 09:48:57 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:34.603 09:48:57 -- nvmf/common.sh@437 -- # prepare_net_devs 00:20:34.603 09:48:57 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:20:34.603 09:48:57 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:20:34.603 09:48:57 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:34.603 09:48:57 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:20:34.603 09:48:57 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:34.603 09:48:57 -- nvmf/common.sh@403 -- # [[ virt != virt ]] 00:20:34.603 09:48:57 -- nvmf/common.sh@405 -- # [[ no == yes ]] 00:20:34.603 09:48:57 -- nvmf/common.sh@412 -- # [[ virt == phy ]] 00:20:34.603 09:48:57 -- nvmf/common.sh@415 -- # [[ virt == phy-fallback ]] 00:20:34.603 09:48:57 -- nvmf/common.sh@420 -- # [[ tcp == tcp ]] 00:20:34.603 09:48:57 -- nvmf/common.sh@421 -- # nvmf_veth_init 00:20:34.603 09:48:57 -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:34.603 09:48:57 -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:20:34.603 09:48:57 -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:20:34.603 09:48:57 -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:20:34.603 09:48:57 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:20:34.603 09:48:57 -- nvmf/common.sh@146 -- # 
NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:20:34.603 09:48:57 -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:20:34.603 09:48:57 -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:20:34.603 09:48:57 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:20:34.603 09:48:57 -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:20:34.603 09:48:57 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:20:34.603 09:48:57 -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:20:34.603 09:48:57 -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:20:34.603 09:48:57 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:20:34.603 Cannot find device "nvmf_tgt_br" 00:20:34.603 09:48:57 -- nvmf/common.sh@155 -- # true 00:20:34.603 09:48:57 -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:20:34.603 Cannot find device "nvmf_tgt_br2" 00:20:34.603 09:48:57 -- nvmf/common.sh@156 -- # true 00:20:34.603 09:48:57 -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:20:34.603 09:48:57 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:20:34.603 Cannot find device "nvmf_tgt_br" 00:20:34.603 09:48:57 -- nvmf/common.sh@158 -- # true 00:20:34.603 09:48:57 -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:20:34.603 Cannot find device "nvmf_tgt_br2" 00:20:34.603 09:48:57 -- nvmf/common.sh@159 -- # true 00:20:34.603 09:48:57 -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:20:34.603 09:48:57 -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:20:34.869 09:48:58 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:20:34.869 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:20:34.869 09:48:58 -- nvmf/common.sh@162 -- # true 00:20:34.869 09:48:58 -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:20:34.869 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:20:34.869 09:48:58 -- nvmf/common.sh@163 -- # true 00:20:34.869 09:48:58 -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:20:34.869 09:48:58 -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:20:34.869 09:48:58 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:20:34.870 09:48:58 -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:20:34.870 09:48:58 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:20:34.870 09:48:58 -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:20:34.870 09:48:58 -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:20:34.870 09:48:58 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:20:34.870 09:48:58 -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:20:34.870 09:48:58 -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:20:34.870 09:48:58 -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:20:34.870 09:48:58 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:20:34.870 09:48:58 -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:20:34.870 09:48:58 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:20:34.870 09:48:58 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 
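The NVME_CONNECT and NVME_HOST identifiers defined when nvmf/common.sh is sourced above are meant for kernel-initiator variants of these tests; this particular run drives I/O with SPDK's own bdevperf instead, so the sketch below is purely illustrative of what those values would be used for. The target address and the default nqn.2016-06.io.spdk:testnqn subsystem NQN come from the same common.sh defaults; substitute the subsystem actually created by a given test.

```bash
#!/usr/bin/env bash
# Illustrative kernel-initiator connect using the hostnqn/hostid from this log.
# Not traced in this run; shown only to explain the NVME_HOST knobs above.
modprobe nvme-tcp
nvme connect -t tcp -a 10.0.0.2 -s 4420 \
    -n nqn.2016-06.io.spdk:testnqn \
    --hostnqn=nqn.2014-08.org.nvmexpress:uuid:e0c1fd61-2902-4c83-8893-eeb512aeb67d \
    --hostid=e0c1fd61-2902-4c83-8893-eeb512aeb67d
nvme list
nvme disconnect -n nqn.2016-06.io.spdk:testnqn
```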
00:20:34.870 09:48:58 -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:20:34.870 09:48:58 -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:20:34.870 09:48:58 -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:20:34.870 09:48:58 -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:20:34.870 09:48:58 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:20:34.870 09:48:58 -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:20:34.870 09:48:58 -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:20:34.870 09:48:58 -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:20:34.870 09:48:58 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:20:34.870 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:20:34.870 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.238 ms 00:20:34.870 00:20:34.870 --- 10.0.0.2 ping statistics --- 00:20:34.870 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:34.870 rtt min/avg/max/mdev = 0.238/0.238/0.238/0.000 ms 00:20:34.870 09:48:58 -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:20:34.870 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:20:34.870 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.046 ms 00:20:34.870 00:20:34.870 --- 10.0.0.3 ping statistics --- 00:20:34.870 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:34.870 rtt min/avg/max/mdev = 0.046/0.046/0.046/0.000 ms 00:20:34.870 09:48:58 -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:20:34.870 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:20:34.870 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.025 ms 00:20:34.870 00:20:34.870 --- 10.0.0.1 ping statistics --- 00:20:34.870 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:34.870 rtt min/avg/max/mdev = 0.025/0.025/0.025/0.000 ms 00:20:34.870 09:48:58 -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:34.870 09:48:58 -- nvmf/common.sh@422 -- # return 0 00:20:34.870 09:48:58 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:20:34.870 09:48:58 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:34.870 09:48:58 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:20:34.870 09:48:58 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:20:34.870 09:48:58 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:34.870 09:48:58 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:20:34.870 09:48:58 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:20:34.870 09:48:58 -- target/nvmf_lvs_grow.sh@98 -- # nvmfappstart -m 0x1 00:20:34.870 09:48:58 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:20:34.870 09:48:58 -- common/autotest_common.sh@710 -- # xtrace_disable 00:20:34.870 09:48:58 -- common/autotest_common.sh@10 -- # set +x 00:20:34.870 09:48:58 -- nvmf/common.sh@469 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:20:34.870 09:48:58 -- nvmf/common.sh@470 -- # nvmfpid=68573 00:20:34.870 09:48:58 -- nvmf/common.sh@471 -- # waitforlisten 68573 00:20:34.870 09:48:58 -- common/autotest_common.sh@817 -- # '[' -z 68573 ']' 00:20:34.870 09:48:58 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:34.870 09:48:58 -- common/autotest_common.sh@822 -- # local max_retries=100 00:20:34.870 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:20:34.870 09:48:58 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:34.870 09:48:58 -- common/autotest_common.sh@826 -- # xtrace_disable 00:20:34.870 09:48:58 -- common/autotest_common.sh@10 -- # set +x 00:20:35.131 [2024-04-18 09:48:58.321727] Starting SPDK v24.05-pre git sha1 65b4e17c6 / DPDK 23.11.0 initialization... 00:20:35.131 [2024-04-18 09:48:58.322349] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:35.131 [2024-04-18 09:48:58.492422] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:35.390 [2024-04-18 09:48:58.774467] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:35.390 [2024-04-18 09:48:58.774555] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:35.390 [2024-04-18 09:48:58.774575] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:35.390 [2024-04-18 09:48:58.774603] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:35.390 [2024-04-18 09:48:58.774618] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:20:35.390 [2024-04-18 09:48:58.774656] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:20:35.958 09:48:59 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:20:35.958 09:48:59 -- common/autotest_common.sh@850 -- # return 0 00:20:35.958 09:48:59 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:20:35.958 09:48:59 -- common/autotest_common.sh@716 -- # xtrace_disable 00:20:35.958 09:48:59 -- common/autotest_common.sh@10 -- # set +x 00:20:35.958 09:48:59 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:35.958 09:48:59 -- target/nvmf_lvs_grow.sh@99 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:20:36.216 [2024-04-18 09:48:59.585092] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:36.216 09:48:59 -- target/nvmf_lvs_grow.sh@101 -- # run_test lvs_grow_clean lvs_grow 00:20:36.216 09:48:59 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:20:36.216 09:48:59 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:20:36.216 09:48:59 -- common/autotest_common.sh@10 -- # set +x 00:20:36.475 ************************************ 00:20:36.475 START TEST lvs_grow_clean 00:20:36.475 ************************************ 00:20:36.475 09:48:59 -- common/autotest_common.sh@1111 -- # lvs_grow 00:20:36.475 09:48:59 -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:20:36.475 09:48:59 -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:20:36.475 09:48:59 -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:20:36.475 09:48:59 -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:20:36.475 09:48:59 -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:20:36.475 09:48:59 -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:20:36.475 09:48:59 -- target/nvmf_lvs_grow.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:20:36.475 09:48:59 -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M 
/home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:20:36.475 09:48:59 -- target/nvmf_lvs_grow.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:20:36.733 09:48:59 -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:20:36.733 09:48:59 -- target/nvmf_lvs_grow.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:20:36.991 09:49:00 -- target/nvmf_lvs_grow.sh@28 -- # lvs=56fd7f6e-2d23-4b8c-93ae-6eddd53dbcdd 00:20:36.991 09:49:00 -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:20:36.991 09:49:00 -- target/nvmf_lvs_grow.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 56fd7f6e-2d23-4b8c-93ae-6eddd53dbcdd 00:20:37.249 09:49:00 -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:20:37.249 09:49:00 -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:20:37.249 09:49:00 -- target/nvmf_lvs_grow.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u 56fd7f6e-2d23-4b8c-93ae-6eddd53dbcdd lvol 150 00:20:37.507 09:49:00 -- target/nvmf_lvs_grow.sh@33 -- # lvol=32499276-a060-4585-9656-73a9266ce8c7 00:20:37.507 09:49:00 -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:20:37.507 09:49:00 -- target/nvmf_lvs_grow.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:20:37.841 [2024-04-18 09:49:00.920534] bdev_aio.c:1030:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:20:37.841 [2024-04-18 09:49:00.920685] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:20:37.841 true 00:20:37.841 09:49:00 -- target/nvmf_lvs_grow.sh@38 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 56fd7f6e-2d23-4b8c-93ae-6eddd53dbcdd 00:20:37.841 09:49:00 -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:20:38.100 09:49:01 -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:20:38.100 09:49:01 -- target/nvmf_lvs_grow.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:20:38.100 09:49:01 -- target/nvmf_lvs_grow.sh@42 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 32499276-a060-4585-9656-73a9266ce8c7 00:20:38.359 09:49:01 -- target/nvmf_lvs_grow.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:20:38.617 [2024-04-18 09:49:01.973412] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:38.617 09:49:01 -- target/nvmf_lvs_grow.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:20:38.875 09:49:02 -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=68659 00:20:38.875 09:49:02 -- target/nvmf_lvs_grow.sh@47 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:20:38.875 09:49:02 -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:20:38.875 09:49:02 -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 68659 
/var/tmp/bdevperf.sock 00:20:38.875 09:49:02 -- common/autotest_common.sh@817 -- # '[' -z 68659 ']' 00:20:38.875 09:49:02 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:38.875 09:49:02 -- common/autotest_common.sh@822 -- # local max_retries=100 00:20:38.875 09:49:02 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:20:38.875 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:20:38.875 09:49:02 -- common/autotest_common.sh@826 -- # xtrace_disable 00:20:38.875 09:49:02 -- common/autotest_common.sh@10 -- # set +x 00:20:39.134 [2024-04-18 09:49:02.336921] Starting SPDK v24.05-pre git sha1 65b4e17c6 / DPDK 23.11.0 initialization... 00:20:39.134 [2024-04-18 09:49:02.337070] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68659 ] 00:20:39.134 [2024-04-18 09:49:02.500973] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:39.392 [2024-04-18 09:49:02.738971] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:20:40.328 09:49:03 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:20:40.328 09:49:03 -- common/autotest_common.sh@850 -- # return 0 00:20:40.328 09:49:03 -- target/nvmf_lvs_grow.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:20:40.328 Nvme0n1 00:20:40.328 09:49:03 -- target/nvmf_lvs_grow.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:20:40.586 [ 00:20:40.586 { 00:20:40.586 "name": "Nvme0n1", 00:20:40.586 "aliases": [ 00:20:40.586 "32499276-a060-4585-9656-73a9266ce8c7" 00:20:40.586 ], 00:20:40.586 "product_name": "NVMe disk", 00:20:40.587 "block_size": 4096, 00:20:40.587 "num_blocks": 38912, 00:20:40.587 "uuid": "32499276-a060-4585-9656-73a9266ce8c7", 00:20:40.587 "assigned_rate_limits": { 00:20:40.587 "rw_ios_per_sec": 0, 00:20:40.587 "rw_mbytes_per_sec": 0, 00:20:40.587 "r_mbytes_per_sec": 0, 00:20:40.587 "w_mbytes_per_sec": 0 00:20:40.587 }, 00:20:40.587 "claimed": false, 00:20:40.587 "zoned": false, 00:20:40.587 "supported_io_types": { 00:20:40.587 "read": true, 00:20:40.587 "write": true, 00:20:40.587 "unmap": true, 00:20:40.587 "write_zeroes": true, 00:20:40.587 "flush": true, 00:20:40.587 "reset": true, 00:20:40.587 "compare": true, 00:20:40.587 "compare_and_write": true, 00:20:40.587 "abort": true, 00:20:40.587 "nvme_admin": true, 00:20:40.587 "nvme_io": true 00:20:40.587 }, 00:20:40.587 "memory_domains": [ 00:20:40.587 { 00:20:40.587 "dma_device_id": "system", 00:20:40.587 "dma_device_type": 1 00:20:40.587 } 00:20:40.587 ], 00:20:40.587 "driver_specific": { 00:20:40.587 "nvme": [ 00:20:40.587 { 00:20:40.587 "trid": { 00:20:40.587 "trtype": "TCP", 00:20:40.587 "adrfam": "IPv4", 00:20:40.587 "traddr": "10.0.0.2", 00:20:40.587 "trsvcid": "4420", 00:20:40.587 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:20:40.587 }, 00:20:40.587 "ctrlr_data": { 00:20:40.587 "cntlid": 1, 00:20:40.587 "vendor_id": "0x8086", 00:20:40.587 "model_number": "SPDK bdev Controller", 00:20:40.587 "serial_number": "SPDK0", 00:20:40.587 "firmware_revision": "24.05", 00:20:40.587 "subnqn": "nqn.2016-06.io.spdk:cnode0", 
00:20:40.587 "oacs": { 00:20:40.587 "security": 0, 00:20:40.587 "format": 0, 00:20:40.587 "firmware": 0, 00:20:40.587 "ns_manage": 0 00:20:40.587 }, 00:20:40.587 "multi_ctrlr": true, 00:20:40.587 "ana_reporting": false 00:20:40.587 }, 00:20:40.587 "vs": { 00:20:40.587 "nvme_version": "1.3" 00:20:40.587 }, 00:20:40.587 "ns_data": { 00:20:40.587 "id": 1, 00:20:40.587 "can_share": true 00:20:40.587 } 00:20:40.587 } 00:20:40.587 ], 00:20:40.587 "mp_policy": "active_passive" 00:20:40.587 } 00:20:40.587 } 00:20:40.587 ] 00:20:40.587 09:49:03 -- target/nvmf_lvs_grow.sh@55 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:20:40.587 09:49:03 -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=68683 00:20:40.587 09:49:03 -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:20:40.846 Running I/O for 10 seconds... 00:20:41.806 Latency(us) 00:20:41.806 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:41.806 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:20:41.806 Nvme0n1 : 1.00 5969.00 23.32 0.00 0.00 0.00 0.00 0.00 00:20:41.806 =================================================================================================================== 00:20:41.806 Total : 5969.00 23.32 0.00 0.00 0.00 0.00 0.00 00:20:41.806 00:20:42.742 09:49:05 -- target/nvmf_lvs_grow.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 56fd7f6e-2d23-4b8c-93ae-6eddd53dbcdd 00:20:42.742 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:20:42.742 Nvme0n1 : 2.00 5969.00 23.32 0.00 0.00 0.00 0.00 0.00 00:20:42.742 =================================================================================================================== 00:20:42.742 Total : 5969.00 23.32 0.00 0.00 0.00 0.00 0.00 00:20:42.742 00:20:43.001 true 00:20:43.001 09:49:06 -- target/nvmf_lvs_grow.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 56fd7f6e-2d23-4b8c-93ae-6eddd53dbcdd 00:20:43.001 09:49:06 -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:20:43.260 09:49:06 -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:20:43.260 09:49:06 -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:20:43.260 09:49:06 -- target/nvmf_lvs_grow.sh@65 -- # wait 68683 00:20:43.827 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:20:43.827 Nvme0n1 : 3.00 6011.33 23.48 0.00 0.00 0.00 0.00 0.00 00:20:43.827 =================================================================================================================== 00:20:43.827 Total : 6011.33 23.48 0.00 0.00 0.00 0.00 0.00 00:20:43.827 00:20:44.762 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:20:44.762 Nvme0n1 : 4.00 6032.50 23.56 0.00 0.00 0.00 0.00 0.00 00:20:44.762 =================================================================================================================== 00:20:44.762 Total : 6032.50 23.56 0.00 0.00 0.00 0.00 0.00 00:20:44.762 00:20:45.697 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:20:45.697 Nvme0n1 : 5.00 6019.80 23.51 0.00 0.00 0.00 0.00 0.00 00:20:45.697 =================================================================================================================== 00:20:45.697 Total : 6019.80 23.51 0.00 0.00 0.00 0.00 0.00 00:20:45.697 00:20:47.120 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:20:47.120 Nvme0n1 : 6.00 
6032.50 23.56 0.00 0.00 0.00 0.00 0.00 00:20:47.120 =================================================================================================================== 00:20:47.120 Total : 6032.50 23.56 0.00 0.00 0.00 0.00 0.00 00:20:47.120 00:20:48.057 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:20:48.057 Nvme0n1 : 7.00 6023.43 23.53 0.00 0.00 0.00 0.00 0.00 00:20:48.057 =================================================================================================================== 00:20:48.057 Total : 6023.43 23.53 0.00 0.00 0.00 0.00 0.00 00:20:48.057 00:20:48.993 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:20:48.993 Nvme0n1 : 8.00 6000.75 23.44 0.00 0.00 0.00 0.00 0.00 00:20:48.993 =================================================================================================================== 00:20:48.993 Total : 6000.75 23.44 0.00 0.00 0.00 0.00 0.00 00:20:48.993 00:20:49.936 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:20:49.936 Nvme0n1 : 9.00 5983.11 23.37 0.00 0.00 0.00 0.00 0.00 00:20:49.936 =================================================================================================================== 00:20:49.936 Total : 5983.11 23.37 0.00 0.00 0.00 0.00 0.00 00:20:49.936 00:20:50.872 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:20:50.872 Nvme0n1 : 10.00 5981.70 23.37 0.00 0.00 0.00 0.00 0.00 00:20:50.872 =================================================================================================================== 00:20:50.872 Total : 5981.70 23.37 0.00 0.00 0.00 0.00 0.00 00:20:50.872 00:20:50.872 00:20:50.872 Latency(us) 00:20:50.872 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:50.872 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:20:50.872 Nvme0n1 : 10.01 5977.77 23.35 0.00 0.00 21402.25 18945.86 45279.42 00:20:50.872 =================================================================================================================== 00:20:50.872 Total : 5977.77 23.35 0.00 0.00 21402.25 18945.86 45279.42 00:20:50.872 0 00:20:50.872 09:49:14 -- target/nvmf_lvs_grow.sh@66 -- # killprocess 68659 00:20:50.872 09:49:14 -- common/autotest_common.sh@936 -- # '[' -z 68659 ']' 00:20:50.872 09:49:14 -- common/autotest_common.sh@940 -- # kill -0 68659 00:20:50.872 09:49:14 -- common/autotest_common.sh@941 -- # uname 00:20:50.872 09:49:14 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:20:50.872 09:49:14 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 68659 00:20:50.872 killing process with pid 68659 00:20:50.872 Received shutdown signal, test time was about 10.000000 seconds 00:20:50.872 00:20:50.872 Latency(us) 00:20:50.872 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:50.872 =================================================================================================================== 00:20:50.872 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:20:50.872 09:49:14 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:20:50.872 09:49:14 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:20:50.872 09:49:14 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 68659' 00:20:50.872 09:49:14 -- common/autotest_common.sh@955 -- # kill 68659 00:20:50.872 09:49:14 -- common/autotest_common.sh@960 -- # wait 68659 00:20:52.251 09:49:15 -- target/nvmf_lvs_grow.sh@68 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:20:52.251 09:49:15 -- target/nvmf_lvs_grow.sh@69 -- # jq -r '.[0].free_clusters' 00:20:52.251 09:49:15 -- target/nvmf_lvs_grow.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 56fd7f6e-2d23-4b8c-93ae-6eddd53dbcdd 00:20:52.509 09:49:15 -- target/nvmf_lvs_grow.sh@69 -- # free_clusters=61 00:20:52.509 09:49:15 -- target/nvmf_lvs_grow.sh@71 -- # [[ '' == \d\i\r\t\y ]] 00:20:52.509 09:49:15 -- target/nvmf_lvs_grow.sh@83 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:20:52.768 [2024-04-18 09:49:16.083094] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:20:52.768 09:49:16 -- target/nvmf_lvs_grow.sh@84 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 56fd7f6e-2d23-4b8c-93ae-6eddd53dbcdd 00:20:52.768 09:49:16 -- common/autotest_common.sh@638 -- # local es=0 00:20:52.768 09:49:16 -- common/autotest_common.sh@640 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 56fd7f6e-2d23-4b8c-93ae-6eddd53dbcdd 00:20:52.768 09:49:16 -- common/autotest_common.sh@626 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:20:52.768 09:49:16 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:20:52.768 09:49:16 -- common/autotest_common.sh@630 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:20:52.768 09:49:16 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:20:52.768 09:49:16 -- common/autotest_common.sh@632 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:20:52.768 09:49:16 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:20:52.768 09:49:16 -- common/autotest_common.sh@632 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:20:52.768 09:49:16 -- common/autotest_common.sh@632 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:20:52.768 09:49:16 -- common/autotest_common.sh@641 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 56fd7f6e-2d23-4b8c-93ae-6eddd53dbcdd 00:20:53.026 request: 00:20:53.026 { 00:20:53.026 "uuid": "56fd7f6e-2d23-4b8c-93ae-6eddd53dbcdd", 00:20:53.026 "method": "bdev_lvol_get_lvstores", 00:20:53.026 "req_id": 1 00:20:53.026 } 00:20:53.026 Got JSON-RPC error response 00:20:53.026 response: 00:20:53.026 { 00:20:53.026 "code": -19, 00:20:53.026 "message": "No such device" 00:20:53.026 } 00:20:53.026 09:49:16 -- common/autotest_common.sh@641 -- # es=1 00:20:53.026 09:49:16 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:20:53.026 09:49:16 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:20:53.026 09:49:16 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:20:53.026 09:49:16 -- target/nvmf_lvs_grow.sh@85 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:20:53.593 aio_bdev 00:20:53.593 09:49:16 -- target/nvmf_lvs_grow.sh@86 -- # waitforbdev 32499276-a060-4585-9656-73a9266ce8c7 00:20:53.593 09:49:16 -- common/autotest_common.sh@885 -- # local bdev_name=32499276-a060-4585-9656-73a9266ce8c7 00:20:53.593 09:49:16 -- common/autotest_common.sh@886 -- # local bdev_timeout= 00:20:53.593 09:49:16 -- common/autotest_common.sh@887 -- # local i 00:20:53.593 09:49:16 -- common/autotest_common.sh@888 -- # [[ -z '' ]] 00:20:53.593 09:49:16 -- common/autotest_common.sh@888 -- # bdev_timeout=2000 00:20:53.593 
09:49:16 -- common/autotest_common.sh@890 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine 00:20:53.851 09:49:17 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 32499276-a060-4585-9656-73a9266ce8c7 -t 2000 00:20:54.110 [ 00:20:54.110 { 00:20:54.110 "name": "32499276-a060-4585-9656-73a9266ce8c7", 00:20:54.110 "aliases": [ 00:20:54.110 "lvs/lvol" 00:20:54.110 ], 00:20:54.110 "product_name": "Logical Volume", 00:20:54.110 "block_size": 4096, 00:20:54.110 "num_blocks": 38912, 00:20:54.110 "uuid": "32499276-a060-4585-9656-73a9266ce8c7", 00:20:54.110 "assigned_rate_limits": { 00:20:54.110 "rw_ios_per_sec": 0, 00:20:54.110 "rw_mbytes_per_sec": 0, 00:20:54.110 "r_mbytes_per_sec": 0, 00:20:54.110 "w_mbytes_per_sec": 0 00:20:54.110 }, 00:20:54.110 "claimed": false, 00:20:54.110 "zoned": false, 00:20:54.110 "supported_io_types": { 00:20:54.110 "read": true, 00:20:54.110 "write": true, 00:20:54.110 "unmap": true, 00:20:54.110 "write_zeroes": true, 00:20:54.110 "flush": false, 00:20:54.110 "reset": true, 00:20:54.110 "compare": false, 00:20:54.110 "compare_and_write": false, 00:20:54.110 "abort": false, 00:20:54.110 "nvme_admin": false, 00:20:54.110 "nvme_io": false 00:20:54.110 }, 00:20:54.110 "driver_specific": { 00:20:54.110 "lvol": { 00:20:54.110 "lvol_store_uuid": "56fd7f6e-2d23-4b8c-93ae-6eddd53dbcdd", 00:20:54.110 "base_bdev": "aio_bdev", 00:20:54.110 "thin_provision": false, 00:20:54.110 "snapshot": false, 00:20:54.110 "clone": false, 00:20:54.110 "esnap_clone": false 00:20:54.110 } 00:20:54.110 } 00:20:54.110 } 00:20:54.110 ] 00:20:54.110 09:49:17 -- common/autotest_common.sh@893 -- # return 0 00:20:54.110 09:49:17 -- target/nvmf_lvs_grow.sh@87 -- # jq -r '.[0].free_clusters' 00:20:54.110 09:49:17 -- target/nvmf_lvs_grow.sh@87 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 56fd7f6e-2d23-4b8c-93ae-6eddd53dbcdd 00:20:54.369 09:49:17 -- target/nvmf_lvs_grow.sh@87 -- # (( free_clusters == 61 )) 00:20:54.369 09:49:17 -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].total_data_clusters' 00:20:54.369 09:49:17 -- target/nvmf_lvs_grow.sh@88 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 56fd7f6e-2d23-4b8c-93ae-6eddd53dbcdd 00:20:54.628 09:49:17 -- target/nvmf_lvs_grow.sh@88 -- # (( data_clusters == 99 )) 00:20:54.628 09:49:17 -- target/nvmf_lvs_grow.sh@91 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete 32499276-a060-4585-9656-73a9266ce8c7 00:20:54.887 09:49:18 -- target/nvmf_lvs_grow.sh@92 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 56fd7f6e-2d23-4b8c-93ae-6eddd53dbcdd 00:20:55.144 09:49:18 -- target/nvmf_lvs_grow.sh@93 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:20:55.402 09:49:18 -- target/nvmf_lvs_grow.sh@94 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:20:55.660 ************************************ 00:20:55.660 END TEST lvs_grow_clean 00:20:55.660 ************************************ 00:20:55.660 00:20:55.660 real 0m19.326s 00:20:55.660 user 0m18.411s 00:20:55.660 sys 0m2.482s 00:20:55.660 09:49:19 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:20:55.660 09:49:19 -- common/autotest_common.sh@10 -- # set +x 00:20:55.660 09:49:19 -- target/nvmf_lvs_grow.sh@102 -- # run_test lvs_grow_dirty lvs_grow dirty 00:20:55.660 09:49:19 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:20:55.660 09:49:19 -- common/autotest_common.sh@1093 -- # xtrace_disable 
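The lvs_grow_clean pass above reduces to a fairly small RPC sequence. The sketch below is assembled by hand from the commands visible in the trace, with rpc.py shown relative to the SPDK repo root and the same file path and sizes as the test defaults; the lvstore and lvol UUIDs are generated at run time, so they appear here as shell variables rather than the literal values from this run.

    # Back an AIO bdev with a 200 MiB file and build a logical volume store on it
    truncate -s 200M /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev
    scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096
    lvs=$(scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 \
              --md-pages-per-cluster-ratio 300 aio_bdev lvs)      # 49 data clusters at this size
    lvol=$(scripts/rpc.py bdev_lvol_create -u "$lvs" lvol 150)    # 150 MiB volume on the store

    # Expose the volume over NVMe/TCP
    scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 "$lvol"
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420

    # Grow the backing file, rescan the AIO bdev, then grow the lvstore while bdevperf runs
    truncate -s 400M /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev
    scripts/rpc.py bdev_aio_rescan aio_bdev
    scripts/rpc.py bdev_lvol_grow_lvstore -u "$lvs"
    scripts/rpc.py bdev_lvol_get_lvstores -u "$lvs" | jq -r '.[0].total_data_clusters'   # 49 -> 99

After the 10-second randwrite run finishes, the test tears the subsystem down, recreates the AIO bdev from the same backing file, and checks that free_clusters (61) and total_data_clusters (99) survive the reload before deleting the lvol and lvstore.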
00:20:55.660 09:49:19 -- common/autotest_common.sh@10 -- # set +x 00:20:55.918 ************************************ 00:20:55.918 START TEST lvs_grow_dirty 00:20:55.918 ************************************ 00:20:55.918 09:49:19 -- common/autotest_common.sh@1111 -- # lvs_grow dirty 00:20:55.918 09:49:19 -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:20:55.918 09:49:19 -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:20:55.918 09:49:19 -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:20:55.918 09:49:19 -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:20:55.918 09:49:19 -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:20:55.918 09:49:19 -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:20:55.918 09:49:19 -- target/nvmf_lvs_grow.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:20:55.918 09:49:19 -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:20:55.918 09:49:19 -- target/nvmf_lvs_grow.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:20:56.176 09:49:19 -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:20:56.176 09:49:19 -- target/nvmf_lvs_grow.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:20:56.435 09:49:19 -- target/nvmf_lvs_grow.sh@28 -- # lvs=2bf3c381-e776-4aaf-a05f-a4fc9f458862 00:20:56.435 09:49:19 -- target/nvmf_lvs_grow.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 2bf3c381-e776-4aaf-a05f-a4fc9f458862 00:20:56.435 09:49:19 -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:20:56.713 09:49:19 -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:20:56.713 09:49:19 -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:20:56.714 09:49:19 -- target/nvmf_lvs_grow.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u 2bf3c381-e776-4aaf-a05f-a4fc9f458862 lvol 150 00:20:56.979 09:49:20 -- target/nvmf_lvs_grow.sh@33 -- # lvol=3e3c2b34-dfa9-4cea-8133-2a0050b0eac5 00:20:56.979 09:49:20 -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:20:56.979 09:49:20 -- target/nvmf_lvs_grow.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:20:57.238 [2024-04-18 09:49:20.402674] bdev_aio.c:1030:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:20:57.238 [2024-04-18 09:49:20.402809] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:20:57.238 true 00:20:57.238 09:49:20 -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:20:57.238 09:49:20 -- target/nvmf_lvs_grow.sh@38 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 2bf3c381-e776-4aaf-a05f-a4fc9f458862 00:20:57.496 09:49:20 -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:20:57.496 09:49:20 -- target/nvmf_lvs_grow.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:20:57.755 09:49:20 -- target/nvmf_lvs_grow.sh@42 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 
3e3c2b34-dfa9-4cea-8133-2a0050b0eac5 00:20:58.012 09:49:21 -- target/nvmf_lvs_grow.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:20:58.012 09:49:21 -- target/nvmf_lvs_grow.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:20:58.578 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:20:58.578 09:49:21 -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=68947 00:20:58.578 09:49:21 -- target/nvmf_lvs_grow.sh@47 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:20:58.578 09:49:21 -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:20:58.578 09:49:21 -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 68947 /var/tmp/bdevperf.sock 00:20:58.578 09:49:21 -- common/autotest_common.sh@817 -- # '[' -z 68947 ']' 00:20:58.578 09:49:21 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:58.578 09:49:21 -- common/autotest_common.sh@822 -- # local max_retries=100 00:20:58.578 09:49:21 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:20:58.578 09:49:21 -- common/autotest_common.sh@826 -- # xtrace_disable 00:20:58.578 09:49:21 -- common/autotest_common.sh@10 -- # set +x 00:20:58.578 [2024-04-18 09:49:21.766821] Starting SPDK v24.05-pre git sha1 65b4e17c6 / DPDK 23.11.0 initialization... 00:20:58.578 [2024-04-18 09:49:21.767233] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68947 ] 00:20:58.578 [2024-04-18 09:49:21.953897] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:58.836 [2024-04-18 09:49:22.201075] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:20:59.402 09:49:22 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:20:59.402 09:49:22 -- common/autotest_common.sh@850 -- # return 0 00:20:59.402 09:49:22 -- target/nvmf_lvs_grow.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:20:59.659 Nvme0n1 00:20:59.659 09:49:22 -- target/nvmf_lvs_grow.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:20:59.919 [ 00:20:59.919 { 00:20:59.919 "name": "Nvme0n1", 00:20:59.919 "aliases": [ 00:20:59.919 "3e3c2b34-dfa9-4cea-8133-2a0050b0eac5" 00:20:59.919 ], 00:20:59.919 "product_name": "NVMe disk", 00:20:59.919 "block_size": 4096, 00:20:59.919 "num_blocks": 38912, 00:20:59.919 "uuid": "3e3c2b34-dfa9-4cea-8133-2a0050b0eac5", 00:20:59.919 "assigned_rate_limits": { 00:20:59.919 "rw_ios_per_sec": 0, 00:20:59.919 "rw_mbytes_per_sec": 0, 00:20:59.919 "r_mbytes_per_sec": 0, 00:20:59.919 "w_mbytes_per_sec": 0 00:20:59.919 }, 00:20:59.919 "claimed": false, 00:20:59.919 "zoned": false, 00:20:59.919 "supported_io_types": { 00:20:59.919 "read": true, 00:20:59.919 "write": true, 00:20:59.919 "unmap": true, 00:20:59.919 "write_zeroes": true, 00:20:59.919 "flush": true, 00:20:59.919 "reset": true, 00:20:59.919 "compare": true, 00:20:59.919 
"compare_and_write": true, 00:20:59.919 "abort": true, 00:20:59.919 "nvme_admin": true, 00:20:59.919 "nvme_io": true 00:20:59.919 }, 00:20:59.919 "memory_domains": [ 00:20:59.919 { 00:20:59.919 "dma_device_id": "system", 00:20:59.919 "dma_device_type": 1 00:20:59.919 } 00:20:59.919 ], 00:20:59.919 "driver_specific": { 00:20:59.919 "nvme": [ 00:20:59.919 { 00:20:59.919 "trid": { 00:20:59.919 "trtype": "TCP", 00:20:59.919 "adrfam": "IPv4", 00:20:59.919 "traddr": "10.0.0.2", 00:20:59.919 "trsvcid": "4420", 00:20:59.919 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:20:59.919 }, 00:20:59.919 "ctrlr_data": { 00:20:59.919 "cntlid": 1, 00:20:59.919 "vendor_id": "0x8086", 00:20:59.919 "model_number": "SPDK bdev Controller", 00:20:59.919 "serial_number": "SPDK0", 00:20:59.919 "firmware_revision": "24.05", 00:20:59.919 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:20:59.919 "oacs": { 00:20:59.919 "security": 0, 00:20:59.919 "format": 0, 00:20:59.919 "firmware": 0, 00:20:59.919 "ns_manage": 0 00:20:59.919 }, 00:20:59.919 "multi_ctrlr": true, 00:20:59.919 "ana_reporting": false 00:20:59.919 }, 00:20:59.919 "vs": { 00:20:59.919 "nvme_version": "1.3" 00:20:59.919 }, 00:20:59.919 "ns_data": { 00:20:59.919 "id": 1, 00:20:59.919 "can_share": true 00:20:59.919 } 00:20:59.919 } 00:20:59.919 ], 00:20:59.919 "mp_policy": "active_passive" 00:20:59.919 } 00:20:59.919 } 00:20:59.919 ] 00:20:59.919 09:49:23 -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=68965 00:20:59.919 09:49:23 -- target/nvmf_lvs_grow.sh@55 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:20:59.919 09:49:23 -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:21:00.213 Running I/O for 10 seconds... 00:21:01.147 Latency(us) 00:21:01.147 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:01.147 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:21:01.147 Nvme0n1 : 1.00 6096.00 23.81 0.00 0.00 0.00 0.00 0.00 00:21:01.147 =================================================================================================================== 00:21:01.147 Total : 6096.00 23.81 0.00 0.00 0.00 0.00 0.00 00:21:01.147 00:21:02.083 09:49:25 -- target/nvmf_lvs_grow.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 2bf3c381-e776-4aaf-a05f-a4fc9f458862 00:21:02.083 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:21:02.083 Nvme0n1 : 2.00 5969.00 23.32 0.00 0.00 0.00 0.00 0.00 00:21:02.083 =================================================================================================================== 00:21:02.083 Total : 5969.00 23.32 0.00 0.00 0.00 0.00 0.00 00:21:02.083 00:21:02.341 true 00:21:02.341 09:49:25 -- target/nvmf_lvs_grow.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 2bf3c381-e776-4aaf-a05f-a4fc9f458862 00:21:02.341 09:49:25 -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:21:02.598 09:49:25 -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:21:02.598 09:49:25 -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:21:02.598 09:49:25 -- target/nvmf_lvs_grow.sh@65 -- # wait 68965 00:21:03.165 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:21:03.165 Nvme0n1 : 3.00 6011.33 23.48 0.00 0.00 0.00 0.00 0.00 00:21:03.165 =================================================================================================================== 00:21:03.165 Total : 6011.33 23.48 0.00 0.00 0.00 0.00 0.00 
00:21:03.165 00:21:04.099 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:21:04.099 Nvme0n1 : 4.00 6000.75 23.44 0.00 0.00 0.00 0.00 0.00 00:21:04.099 =================================================================================================================== 00:21:04.099 Total : 6000.75 23.44 0.00 0.00 0.00 0.00 0.00 00:21:04.099 00:21:05.031 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:21:05.031 Nvme0n1 : 5.00 5927.20 23.15 0.00 0.00 0.00 0.00 0.00 00:21:05.031 =================================================================================================================== 00:21:05.031 Total : 5927.20 23.15 0.00 0.00 0.00 0.00 0.00 00:21:05.031 00:21:06.408 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:21:06.408 Nvme0n1 : 6.00 5849.50 22.85 0.00 0.00 0.00 0.00 0.00 00:21:06.408 =================================================================================================================== 00:21:06.408 Total : 5849.50 22.85 0.00 0.00 0.00 0.00 0.00 00:21:06.408 00:21:07.060 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:21:07.060 Nvme0n1 : 7.00 5812.14 22.70 0.00 0.00 0.00 0.00 0.00 00:21:07.060 =================================================================================================================== 00:21:07.060 Total : 5812.14 22.70 0.00 0.00 0.00 0.00 0.00 00:21:07.060 00:21:08.004 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:21:08.004 Nvme0n1 : 8.00 5768.25 22.53 0.00 0.00 0.00 0.00 0.00 00:21:08.005 =================================================================================================================== 00:21:08.005 Total : 5768.25 22.53 0.00 0.00 0.00 0.00 0.00 00:21:08.005 00:21:09.383 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:21:09.383 Nvme0n1 : 9.00 5776.44 22.56 0.00 0.00 0.00 0.00 0.00 00:21:09.383 =================================================================================================================== 00:21:09.383 Total : 5776.44 22.56 0.00 0.00 0.00 0.00 0.00 00:21:09.383 00:21:10.337 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:21:10.337 Nvme0n1 : 10.00 5783.00 22.59 0.00 0.00 0.00 0.00 0.00 00:21:10.337 =================================================================================================================== 00:21:10.337 Total : 5783.00 22.59 0.00 0.00 0.00 0.00 0.00 00:21:10.337 00:21:10.337 00:21:10.337 Latency(us) 00:21:10.337 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:10.337 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:21:10.337 Nvme0n1 : 10.01 5791.11 22.62 0.00 0.00 22096.37 15609.48 80549.70 00:21:10.337 =================================================================================================================== 00:21:10.337 Total : 5791.11 22.62 0.00 0.00 22096.37 15609.48 80549.70 00:21:10.337 0 00:21:10.337 09:49:33 -- target/nvmf_lvs_grow.sh@66 -- # killprocess 68947 00:21:10.337 09:49:33 -- common/autotest_common.sh@936 -- # '[' -z 68947 ']' 00:21:10.337 09:49:33 -- common/autotest_common.sh@940 -- # kill -0 68947 00:21:10.337 09:49:33 -- common/autotest_common.sh@941 -- # uname 00:21:10.337 09:49:33 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:21:10.337 09:49:33 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 68947 00:21:10.337 09:49:33 -- common/autotest_common.sh@942 
-- # process_name=reactor_1 00:21:10.337 09:49:33 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:21:10.337 09:49:33 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 68947' 00:21:10.337 killing process with pid 68947 00:21:10.337 09:49:33 -- common/autotest_common.sh@955 -- # kill 68947 00:21:10.337 Received shutdown signal, test time was about 10.000000 seconds 00:21:10.337 00:21:10.337 Latency(us) 00:21:10.337 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:10.337 =================================================================================================================== 00:21:10.337 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:21:10.337 09:49:33 -- common/autotest_common.sh@960 -- # wait 68947 00:21:11.273 09:49:34 -- target/nvmf_lvs_grow.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:21:11.531 09:49:34 -- target/nvmf_lvs_grow.sh@69 -- # jq -r '.[0].free_clusters' 00:21:11.531 09:49:34 -- target/nvmf_lvs_grow.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 2bf3c381-e776-4aaf-a05f-a4fc9f458862 00:21:11.789 09:49:35 -- target/nvmf_lvs_grow.sh@69 -- # free_clusters=61 00:21:11.789 09:49:35 -- target/nvmf_lvs_grow.sh@71 -- # [[ dirty == \d\i\r\t\y ]] 00:21:11.789 09:49:35 -- target/nvmf_lvs_grow.sh@73 -- # kill -9 68573 00:21:11.789 09:49:35 -- target/nvmf_lvs_grow.sh@74 -- # wait 68573 00:21:12.047 /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvs_grow.sh: line 74: 68573 Killed "${NVMF_APP[@]}" "$@" 00:21:12.047 09:49:35 -- target/nvmf_lvs_grow.sh@74 -- # true 00:21:12.047 09:49:35 -- target/nvmf_lvs_grow.sh@75 -- # nvmfappstart -m 0x1 00:21:12.047 09:49:35 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:21:12.047 09:49:35 -- common/autotest_common.sh@710 -- # xtrace_disable 00:21:12.047 09:49:35 -- common/autotest_common.sh@10 -- # set +x 00:21:12.047 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:12.047 09:49:35 -- nvmf/common.sh@470 -- # nvmfpid=69109 00:21:12.047 09:49:35 -- nvmf/common.sh@469 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:21:12.047 09:49:35 -- nvmf/common.sh@471 -- # waitforlisten 69109 00:21:12.047 09:49:35 -- common/autotest_common.sh@817 -- # '[' -z 69109 ']' 00:21:12.047 09:49:35 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:12.047 09:49:35 -- common/autotest_common.sh@822 -- # local max_retries=100 00:21:12.047 09:49:35 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:12.047 09:49:35 -- common/autotest_common.sh@826 -- # xtrace_disable 00:21:12.047 09:49:35 -- common/autotest_common.sh@10 -- # set +x 00:21:12.047 [2024-04-18 09:49:35.302617] Starting SPDK v24.05-pre git sha1 65b4e17c6 / DPDK 23.11.0 initialization... 00:21:12.047 [2024-04-18 09:49:35.303001] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:12.305 [2024-04-18 09:49:35.472547] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:12.563 [2024-04-18 09:49:35.710112] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:21:12.563 [2024-04-18 09:49:35.710349] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:12.563 [2024-04-18 09:49:35.710503] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:12.563 [2024-04-18 09:49:35.710659] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:12.563 [2024-04-18 09:49:35.710707] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:21:12.563 [2024-04-18 09:49:35.710891] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:21:13.130 09:49:36 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:21:13.130 09:49:36 -- common/autotest_common.sh@850 -- # return 0 00:21:13.130 09:49:36 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:21:13.130 09:49:36 -- common/autotest_common.sh@716 -- # xtrace_disable 00:21:13.130 09:49:36 -- common/autotest_common.sh@10 -- # set +x 00:21:13.130 09:49:36 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:13.130 09:49:36 -- target/nvmf_lvs_grow.sh@76 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:21:13.130 [2024-04-18 09:49:36.511306] blobstore.c:4779:bs_recover: *NOTICE*: Performing recovery on blobstore 00:21:13.130 [2024-04-18 09:49:36.512262] blobstore.c:4726:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0 00:21:13.130 [2024-04-18 09:49:36.512651] blobstore.c:4726:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1 00:21:13.388 09:49:36 -- target/nvmf_lvs_grow.sh@76 -- # aio_bdev=aio_bdev 00:21:13.388 09:49:36 -- target/nvmf_lvs_grow.sh@77 -- # waitforbdev 3e3c2b34-dfa9-4cea-8133-2a0050b0eac5 00:21:13.388 09:49:36 -- common/autotest_common.sh@885 -- # local bdev_name=3e3c2b34-dfa9-4cea-8133-2a0050b0eac5 00:21:13.388 09:49:36 -- common/autotest_common.sh@886 -- # local bdev_timeout= 00:21:13.388 09:49:36 -- common/autotest_common.sh@887 -- # local i 00:21:13.388 09:49:36 -- common/autotest_common.sh@888 -- # [[ -z '' ]] 00:21:13.388 09:49:36 -- common/autotest_common.sh@888 -- # bdev_timeout=2000 00:21:13.388 09:49:36 -- common/autotest_common.sh@890 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine 00:21:13.698 09:49:36 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 3e3c2b34-dfa9-4cea-8133-2a0050b0eac5 -t 2000 00:21:13.698 [ 00:21:13.698 { 00:21:13.698 "name": "3e3c2b34-dfa9-4cea-8133-2a0050b0eac5", 00:21:13.698 "aliases": [ 00:21:13.698 "lvs/lvol" 00:21:13.698 ], 00:21:13.698 "product_name": "Logical Volume", 00:21:13.698 "block_size": 4096, 00:21:13.698 "num_blocks": 38912, 00:21:13.698 "uuid": "3e3c2b34-dfa9-4cea-8133-2a0050b0eac5", 00:21:13.698 "assigned_rate_limits": { 00:21:13.698 "rw_ios_per_sec": 0, 00:21:13.698 "rw_mbytes_per_sec": 0, 00:21:13.698 "r_mbytes_per_sec": 0, 00:21:13.698 "w_mbytes_per_sec": 0 00:21:13.698 }, 00:21:13.698 "claimed": false, 00:21:13.698 "zoned": false, 00:21:13.698 "supported_io_types": { 00:21:13.698 "read": true, 00:21:13.698 "write": true, 00:21:13.698 "unmap": true, 00:21:13.698 "write_zeroes": true, 00:21:13.698 "flush": false, 00:21:13.698 "reset": true, 00:21:13.698 "compare": false, 00:21:13.698 "compare_and_write": false, 00:21:13.698 "abort": false, 00:21:13.698 "nvme_admin": false, 00:21:13.698 "nvme_io": false 00:21:13.698 }, 00:21:13.698 
"driver_specific": { 00:21:13.698 "lvol": { 00:21:13.698 "lvol_store_uuid": "2bf3c381-e776-4aaf-a05f-a4fc9f458862", 00:21:13.698 "base_bdev": "aio_bdev", 00:21:13.698 "thin_provision": false, 00:21:13.698 "snapshot": false, 00:21:13.698 "clone": false, 00:21:13.698 "esnap_clone": false 00:21:13.698 } 00:21:13.698 } 00:21:13.698 } 00:21:13.698 ] 00:21:13.956 09:49:37 -- common/autotest_common.sh@893 -- # return 0 00:21:13.956 09:49:37 -- target/nvmf_lvs_grow.sh@78 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 2bf3c381-e776-4aaf-a05f-a4fc9f458862 00:21:13.956 09:49:37 -- target/nvmf_lvs_grow.sh@78 -- # jq -r '.[0].free_clusters' 00:21:13.956 09:49:37 -- target/nvmf_lvs_grow.sh@78 -- # (( free_clusters == 61 )) 00:21:13.956 09:49:37 -- target/nvmf_lvs_grow.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 2bf3c381-e776-4aaf-a05f-a4fc9f458862 00:21:13.956 09:49:37 -- target/nvmf_lvs_grow.sh@79 -- # jq -r '.[0].total_data_clusters' 00:21:14.215 09:49:37 -- target/nvmf_lvs_grow.sh@79 -- # (( data_clusters == 99 )) 00:21:14.215 09:49:37 -- target/nvmf_lvs_grow.sh@83 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:21:14.472 [2024-04-18 09:49:37.796497] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:21:14.472 09:49:37 -- target/nvmf_lvs_grow.sh@84 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 2bf3c381-e776-4aaf-a05f-a4fc9f458862 00:21:14.472 09:49:37 -- common/autotest_common.sh@638 -- # local es=0 00:21:14.472 09:49:37 -- common/autotest_common.sh@640 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 2bf3c381-e776-4aaf-a05f-a4fc9f458862 00:21:14.472 09:49:37 -- common/autotest_common.sh@626 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:21:14.472 09:49:37 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:21:14.473 09:49:37 -- common/autotest_common.sh@630 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:21:14.473 09:49:37 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:21:14.473 09:49:37 -- common/autotest_common.sh@632 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:21:14.473 09:49:37 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:21:14.473 09:49:37 -- common/autotest_common.sh@632 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:21:14.473 09:49:37 -- common/autotest_common.sh@632 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:21:14.473 09:49:37 -- common/autotest_common.sh@641 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 2bf3c381-e776-4aaf-a05f-a4fc9f458862 00:21:14.731 request: 00:21:14.731 { 00:21:14.731 "uuid": "2bf3c381-e776-4aaf-a05f-a4fc9f458862", 00:21:14.731 "method": "bdev_lvol_get_lvstores", 00:21:14.731 "req_id": 1 00:21:14.731 } 00:21:14.731 Got JSON-RPC error response 00:21:14.731 response: 00:21:14.731 { 00:21:14.731 "code": -19, 00:21:14.731 "message": "No such device" 00:21:14.731 } 00:21:14.731 09:49:38 -- common/autotest_common.sh@641 -- # es=1 00:21:14.731 09:49:38 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:21:14.731 09:49:38 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:21:14.731 09:49:38 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:21:14.731 09:49:38 -- target/nvmf_lvs_grow.sh@85 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create 
/home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:21:14.990 aio_bdev 00:21:14.990 09:49:38 -- target/nvmf_lvs_grow.sh@86 -- # waitforbdev 3e3c2b34-dfa9-4cea-8133-2a0050b0eac5 00:21:14.990 09:49:38 -- common/autotest_common.sh@885 -- # local bdev_name=3e3c2b34-dfa9-4cea-8133-2a0050b0eac5 00:21:14.990 09:49:38 -- common/autotest_common.sh@886 -- # local bdev_timeout= 00:21:14.990 09:49:38 -- common/autotest_common.sh@887 -- # local i 00:21:14.990 09:49:38 -- common/autotest_common.sh@888 -- # [[ -z '' ]] 00:21:14.990 09:49:38 -- common/autotest_common.sh@888 -- # bdev_timeout=2000 00:21:14.990 09:49:38 -- common/autotest_common.sh@890 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine 00:21:15.248 09:49:38 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 3e3c2b34-dfa9-4cea-8133-2a0050b0eac5 -t 2000 00:21:15.507 [ 00:21:15.507 { 00:21:15.507 "name": "3e3c2b34-dfa9-4cea-8133-2a0050b0eac5", 00:21:15.507 "aliases": [ 00:21:15.507 "lvs/lvol" 00:21:15.507 ], 00:21:15.507 "product_name": "Logical Volume", 00:21:15.507 "block_size": 4096, 00:21:15.507 "num_blocks": 38912, 00:21:15.507 "uuid": "3e3c2b34-dfa9-4cea-8133-2a0050b0eac5", 00:21:15.507 "assigned_rate_limits": { 00:21:15.507 "rw_ios_per_sec": 0, 00:21:15.507 "rw_mbytes_per_sec": 0, 00:21:15.507 "r_mbytes_per_sec": 0, 00:21:15.507 "w_mbytes_per_sec": 0 00:21:15.507 }, 00:21:15.507 "claimed": false, 00:21:15.507 "zoned": false, 00:21:15.507 "supported_io_types": { 00:21:15.507 "read": true, 00:21:15.507 "write": true, 00:21:15.507 "unmap": true, 00:21:15.507 "write_zeroes": true, 00:21:15.507 "flush": false, 00:21:15.507 "reset": true, 00:21:15.507 "compare": false, 00:21:15.507 "compare_and_write": false, 00:21:15.507 "abort": false, 00:21:15.507 "nvme_admin": false, 00:21:15.507 "nvme_io": false 00:21:15.507 }, 00:21:15.507 "driver_specific": { 00:21:15.507 "lvol": { 00:21:15.507 "lvol_store_uuid": "2bf3c381-e776-4aaf-a05f-a4fc9f458862", 00:21:15.507 "base_bdev": "aio_bdev", 00:21:15.508 "thin_provision": false, 00:21:15.508 "snapshot": false, 00:21:15.508 "clone": false, 00:21:15.508 "esnap_clone": false 00:21:15.508 } 00:21:15.508 } 00:21:15.508 } 00:21:15.508 ] 00:21:15.508 09:49:38 -- common/autotest_common.sh@893 -- # return 0 00:21:15.508 09:49:38 -- target/nvmf_lvs_grow.sh@87 -- # jq -r '.[0].free_clusters' 00:21:15.508 09:49:38 -- target/nvmf_lvs_grow.sh@87 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 2bf3c381-e776-4aaf-a05f-a4fc9f458862 00:21:15.765 09:49:39 -- target/nvmf_lvs_grow.sh@87 -- # (( free_clusters == 61 )) 00:21:15.765 09:49:39 -- target/nvmf_lvs_grow.sh@88 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 2bf3c381-e776-4aaf-a05f-a4fc9f458862 00:21:15.765 09:49:39 -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].total_data_clusters' 00:21:16.024 09:49:39 -- target/nvmf_lvs_grow.sh@88 -- # (( data_clusters == 99 )) 00:21:16.024 09:49:39 -- target/nvmf_lvs_grow.sh@91 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete 3e3c2b34-dfa9-4cea-8133-2a0050b0eac5 00:21:16.283 09:49:39 -- target/nvmf_lvs_grow.sh@92 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 2bf3c381-e776-4aaf-a05f-a4fc9f458862 00:21:16.850 09:49:39 -- target/nvmf_lvs_grow.sh@93 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:21:17.109 09:49:40 -- target/nvmf_lvs_grow.sh@94 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 
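The dirty variant above exercises recovery rather than a clean reload: after growing the lvstore under I/O it kills the nvmf target with SIGKILL while the lvstore metadata is still marked dirty, restarts the target, and re-registers the AIO bdev so that the blobstore has to replay its metadata (the "Performing recovery on blobstore" notice earlier in the trace). A condensed, hand-assembled sketch of that part of the flow, again with rpc.py relative to the repo root and the lvstore UUID as a placeholder:

    # Kill the target hard while the lvstore is dirty, then start a fresh one
    kill -9 "$nvmfpid"; wait "$nvmfpid" || true
    ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 &

    # Re-register the AIO bdev; loading it triggers blobstore recovery
    scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096
    scripts/rpc.py bdev_wait_for_examine

    # The recovered lvstore must still show the grown geometry
    scripts/rpc.py bdev_lvol_get_lvstores -u "$lvs" | jq -r '.[0].free_clusters'         # 61
    scripts/rpc.py bdev_lvol_get_lvstores -u "$lvs" | jq -r '.[0].total_data_clusters'   # 99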
00:21:17.367 00:21:17.367 real 0m21.467s 00:21:17.367 user 0m47.665s 00:21:17.367 sys 0m7.339s 00:21:17.367 09:49:40 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:21:17.367 ************************************ 00:21:17.367 END TEST lvs_grow_dirty 00:21:17.367 ************************************ 00:21:17.367 09:49:40 -- common/autotest_common.sh@10 -- # set +x 00:21:17.367 09:49:40 -- target/nvmf_lvs_grow.sh@1 -- # process_shm --id 0 00:21:17.367 09:49:40 -- common/autotest_common.sh@794 -- # type=--id 00:21:17.367 09:49:40 -- common/autotest_common.sh@795 -- # id=0 00:21:17.367 09:49:40 -- common/autotest_common.sh@796 -- # '[' --id = --pid ']' 00:21:17.367 09:49:40 -- common/autotest_common.sh@800 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:21:17.367 09:49:40 -- common/autotest_common.sh@800 -- # shm_files=nvmf_trace.0 00:21:17.367 09:49:40 -- common/autotest_common.sh@802 -- # [[ -z nvmf_trace.0 ]] 00:21:17.367 09:49:40 -- common/autotest_common.sh@806 -- # for n in $shm_files 00:21:17.367 09:49:40 -- common/autotest_common.sh@807 -- # tar -C /dev/shm/ -cvzf /home/vagrant/spdk_repo/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:21:17.367 nvmf_trace.0 00:21:17.367 09:49:40 -- common/autotest_common.sh@809 -- # return 0 00:21:17.367 09:49:40 -- target/nvmf_lvs_grow.sh@1 -- # nvmftestfini 00:21:17.367 09:49:40 -- nvmf/common.sh@477 -- # nvmfcleanup 00:21:17.367 09:49:40 -- nvmf/common.sh@117 -- # sync 00:21:17.934 09:49:41 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:21:17.934 09:49:41 -- nvmf/common.sh@120 -- # set +e 00:21:17.934 09:49:41 -- nvmf/common.sh@121 -- # for i in {1..20} 00:21:17.934 09:49:41 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:21:17.934 rmmod nvme_tcp 00:21:17.934 rmmod nvme_fabrics 00:21:17.934 rmmod nvme_keyring 00:21:17.934 09:49:41 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:21:17.934 09:49:41 -- nvmf/common.sh@124 -- # set -e 00:21:17.934 09:49:41 -- nvmf/common.sh@125 -- # return 0 00:21:17.934 09:49:41 -- nvmf/common.sh@478 -- # '[' -n 69109 ']' 00:21:17.934 09:49:41 -- nvmf/common.sh@479 -- # killprocess 69109 00:21:17.934 09:49:41 -- common/autotest_common.sh@936 -- # '[' -z 69109 ']' 00:21:17.934 09:49:41 -- common/autotest_common.sh@940 -- # kill -0 69109 00:21:17.934 09:49:41 -- common/autotest_common.sh@941 -- # uname 00:21:17.934 09:49:41 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:21:17.934 09:49:41 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 69109 00:21:17.934 killing process with pid 69109 00:21:17.934 09:49:41 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:21:17.934 09:49:41 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:21:17.934 09:49:41 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 69109' 00:21:17.934 09:49:41 -- common/autotest_common.sh@955 -- # kill 69109 00:21:17.934 09:49:41 -- common/autotest_common.sh@960 -- # wait 69109 00:21:18.869 09:49:42 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:21:18.869 09:49:42 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:21:18.869 09:49:42 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:21:18.869 09:49:42 -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:21:18.869 09:49:42 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:21:18.869 09:49:42 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:18.869 09:49:42 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:21:18.869 09:49:42 -- 
common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:19.128 09:49:42 -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:21:19.128 ************************************ 00:21:19.128 END TEST nvmf_lvs_grow 00:21:19.128 ************************************ 00:21:19.128 00:21:19.128 real 0m44.531s 00:21:19.128 user 1m13.623s 00:21:19.128 sys 0m10.877s 00:21:19.128 09:49:42 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:21:19.128 09:49:42 -- common/autotest_common.sh@10 -- # set +x 00:21:19.128 09:49:42 -- nvmf/nvmf.sh@50 -- # run_test nvmf_bdev_io_wait /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:21:19.128 09:49:42 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:21:19.128 09:49:42 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:21:19.128 09:49:42 -- common/autotest_common.sh@10 -- # set +x 00:21:19.128 ************************************ 00:21:19.128 START TEST nvmf_bdev_io_wait 00:21:19.128 ************************************ 00:21:19.128 09:49:42 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:21:19.128 * Looking for test storage... 00:21:19.128 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:21:19.128 09:49:42 -- target/bdev_io_wait.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:21:19.128 09:49:42 -- nvmf/common.sh@7 -- # uname -s 00:21:19.128 09:49:42 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:19.128 09:49:42 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:19.128 09:49:42 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:19.128 09:49:42 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:19.128 09:49:42 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:19.128 09:49:42 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:19.128 09:49:42 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:19.128 09:49:42 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:19.128 09:49:42 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:19.128 09:49:42 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:19.128 09:49:42 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:e0c1fd61-2902-4c83-8893-eeb512aeb67d 00:21:19.128 09:49:42 -- nvmf/common.sh@18 -- # NVME_HOSTID=e0c1fd61-2902-4c83-8893-eeb512aeb67d 00:21:19.128 09:49:42 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:19.128 09:49:42 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:19.128 09:49:42 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:21:19.128 09:49:42 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:19.128 09:49:42 -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:21:19.128 09:49:42 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:19.128 09:49:42 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:19.128 09:49:42 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:19.128 09:49:42 -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:19.128 09:49:42 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:19.128 09:49:42 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:19.128 09:49:42 -- paths/export.sh@5 -- # export PATH 00:21:19.128 09:49:42 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:19.128 09:49:42 -- nvmf/common.sh@47 -- # : 0 00:21:19.387 09:49:42 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:21:19.387 09:49:42 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:21:19.387 09:49:42 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:21:19.387 09:49:42 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:19.387 09:49:42 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:19.387 09:49:42 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:21:19.387 09:49:42 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:21:19.387 09:49:42 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:21:19.387 09:49:42 -- target/bdev_io_wait.sh@11 -- # MALLOC_BDEV_SIZE=64 00:21:19.387 09:49:42 -- target/bdev_io_wait.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:21:19.387 09:49:42 -- target/bdev_io_wait.sh@14 -- # nvmftestinit 00:21:19.387 09:49:42 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:21:19.387 09:49:42 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:19.387 09:49:42 -- nvmf/common.sh@437 -- # prepare_net_devs 00:21:19.387 09:49:42 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:21:19.387 09:49:42 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:21:19.387 09:49:42 -- nvmf/common.sh@617 -- # 
xtrace_disable_per_cmd _remove_spdk_ns 00:21:19.387 09:49:42 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:21:19.387 09:49:42 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:19.387 09:49:42 -- nvmf/common.sh@403 -- # [[ virt != virt ]] 00:21:19.387 09:49:42 -- nvmf/common.sh@405 -- # [[ no == yes ]] 00:21:19.387 09:49:42 -- nvmf/common.sh@412 -- # [[ virt == phy ]] 00:21:19.387 09:49:42 -- nvmf/common.sh@415 -- # [[ virt == phy-fallback ]] 00:21:19.387 09:49:42 -- nvmf/common.sh@420 -- # [[ tcp == tcp ]] 00:21:19.387 09:49:42 -- nvmf/common.sh@421 -- # nvmf_veth_init 00:21:19.387 09:49:42 -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:19.387 09:49:42 -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:21:19.387 09:49:42 -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:21:19.387 09:49:42 -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:21:19.387 09:49:42 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:21:19.387 09:49:42 -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:21:19.387 09:49:42 -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:21:19.387 09:49:42 -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:19.387 09:49:42 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:21:19.387 09:49:42 -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:21:19.387 09:49:42 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:21:19.387 09:49:42 -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:21:19.387 09:49:42 -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:21:19.387 09:49:42 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:21:19.387 Cannot find device "nvmf_tgt_br" 00:21:19.387 09:49:42 -- nvmf/common.sh@155 -- # true 00:21:19.387 09:49:42 -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:21:19.387 Cannot find device "nvmf_tgt_br2" 00:21:19.387 09:49:42 -- nvmf/common.sh@156 -- # true 00:21:19.387 09:49:42 -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:21:19.387 09:49:42 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:21:19.387 Cannot find device "nvmf_tgt_br" 00:21:19.387 09:49:42 -- nvmf/common.sh@158 -- # true 00:21:19.387 09:49:42 -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:21:19.387 Cannot find device "nvmf_tgt_br2" 00:21:19.387 09:49:42 -- nvmf/common.sh@159 -- # true 00:21:19.387 09:49:42 -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:21:19.387 09:49:42 -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:21:19.387 09:49:42 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:21:19.387 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:21:19.387 09:49:42 -- nvmf/common.sh@162 -- # true 00:21:19.387 09:49:42 -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:21:19.387 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:21:19.387 09:49:42 -- nvmf/common.sh@163 -- # true 00:21:19.387 09:49:42 -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:21:19.387 09:49:42 -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:21:19.387 09:49:42 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:21:19.387 09:49:42 -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name 
nvmf_tgt_br2 00:21:19.387 09:49:42 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:21:19.387 09:49:42 -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:21:19.387 09:49:42 -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:21:19.387 09:49:42 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:21:19.387 09:49:42 -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:21:19.387 09:49:42 -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:21:19.387 09:49:42 -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:21:19.387 09:49:42 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:21:19.387 09:49:42 -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:21:19.387 09:49:42 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:21:19.645 09:49:42 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:21:19.645 09:49:42 -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:21:19.645 09:49:42 -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:21:19.645 09:49:42 -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:21:19.645 09:49:42 -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:21:19.645 09:49:42 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:21:19.645 09:49:42 -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:21:19.645 09:49:42 -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:21:19.645 09:49:42 -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:21:19.645 09:49:42 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:21:19.645 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:21:19.645 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.055 ms 00:21:19.645 00:21:19.645 --- 10.0.0.2 ping statistics --- 00:21:19.645 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:19.645 rtt min/avg/max/mdev = 0.055/0.055/0.055/0.000 ms 00:21:19.645 09:49:42 -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:21:19.645 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:21:19.645 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.032 ms 00:21:19.645 00:21:19.645 --- 10.0.0.3 ping statistics --- 00:21:19.645 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:19.645 rtt min/avg/max/mdev = 0.032/0.032/0.032/0.000 ms 00:21:19.645 09:49:42 -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:21:19.645 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:21:19.645 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.018 ms 00:21:19.645 00:21:19.645 --- 10.0.0.1 ping statistics --- 00:21:19.645 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:19.645 rtt min/avg/max/mdev = 0.018/0.018/0.018/0.000 ms 00:21:19.645 09:49:42 -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:19.645 09:49:42 -- nvmf/common.sh@422 -- # return 0 00:21:19.645 09:49:42 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:21:19.645 09:49:42 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:19.645 09:49:42 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:21:19.645 09:49:42 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:21:19.645 09:49:42 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:19.645 09:49:42 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:21:19.645 09:49:42 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:21:19.645 09:49:42 -- target/bdev_io_wait.sh@15 -- # nvmfappstart -m 0xF --wait-for-rpc 00:21:19.645 09:49:42 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:21:19.645 09:49:42 -- common/autotest_common.sh@710 -- # xtrace_disable 00:21:19.645 09:49:42 -- common/autotest_common.sh@10 -- # set +x 00:21:19.645 09:49:42 -- nvmf/common.sh@470 -- # nvmfpid=69439 00:21:19.645 09:49:42 -- nvmf/common.sh@471 -- # waitforlisten 69439 00:21:19.645 09:49:42 -- common/autotest_common.sh@817 -- # '[' -z 69439 ']' 00:21:19.645 09:49:42 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:19.645 09:49:42 -- common/autotest_common.sh@822 -- # local max_retries=100 00:21:19.645 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:19.645 09:49:42 -- nvmf/common.sh@469 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:21:19.645 09:49:42 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:19.645 09:49:42 -- common/autotest_common.sh@826 -- # xtrace_disable 00:21:19.645 09:49:42 -- common/autotest_common.sh@10 -- # set +x 00:21:19.645 [2024-04-18 09:49:42.993569] Starting SPDK v24.05-pre git sha1 65b4e17c6 / DPDK 23.11.0 initialization... 00:21:19.645 [2024-04-18 09:49:42.993772] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:19.907 [2024-04-18 09:49:43.179956] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:21:20.169 [2024-04-18 09:49:43.485420] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:20.169 [2024-04-18 09:49:43.485505] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:20.169 [2024-04-18 09:49:43.485536] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:20.169 [2024-04-18 09:49:43.485552] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:20.169 [2024-04-18 09:49:43.485569] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
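The trace above is nvmf_veth_init building the virtual test network before the target starts: a network namespace for the target side, three veth pairs, a bridge joining the host-side peers, iptables rules for port 4420, and ping checks in both directions. A minimal sketch of the same topology is below; every command is taken from the nvmf/common.sh lines echoed above, it assumes root plus iproute2/iptables, and it is only a reconstruction of what the harness does, not the harness itself.

# Namespace and veth pairs, as in nvmf/common.sh@166-171.
ip netns add nvmf_tgt_ns_spdk
ip link add nvmf_init_if type veth peer name nvmf_init_br
ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2
# Target-side ends move into the namespace; addresses match the ping targets above.
ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
ip addr add 10.0.0.1/24 dev nvmf_init_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2
# Bring everything up and bridge the host-side peers together.
ip link set nvmf_init_if up
ip link set nvmf_init_br up
ip link set nvmf_tgt_br up
ip link set nvmf_tgt_br2 up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
ip netns exec nvmf_tgt_ns_spdk ip link set lo up
ip link add nvmf_br type bridge
ip link set nvmf_br up
ip link set nvmf_init_br master nvmf_br
ip link set nvmf_tgt_br master nvmf_br
ip link set nvmf_tgt_br2 master nvmf_br
# Open NVMe/TCP traffic and verify reachability, exactly as the log does.
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
ping -c 1 10.0.0.2
ping -c 1 10.0.0.3
ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1
# The target then runs inside the namespace, paused until RPC configuration arrives.
ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc &

With --wait-for-rpc the application stops before subsystem initialization, which is why the trace goes on to issue framework_start_init over RPC before creating the transport.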
00:21:20.169 [2024-04-18 09:49:43.485737] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:21:20.169 [2024-04-18 09:49:43.486105] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:21:20.169 [2024-04-18 09:49:43.486461] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:21:20.169 [2024-04-18 09:49:43.486468] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:21:20.788 09:49:43 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:21:20.788 09:49:43 -- common/autotest_common.sh@850 -- # return 0 00:21:20.788 09:49:43 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:21:20.788 09:49:43 -- common/autotest_common.sh@716 -- # xtrace_disable 00:21:20.788 09:49:43 -- common/autotest_common.sh@10 -- # set +x 00:21:20.788 09:49:43 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:20.788 09:49:43 -- target/bdev_io_wait.sh@18 -- # rpc_cmd bdev_set_options -p 5 -c 1 00:21:20.788 09:49:43 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:20.788 09:49:43 -- common/autotest_common.sh@10 -- # set +x 00:21:20.788 09:49:43 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:20.788 09:49:43 -- target/bdev_io_wait.sh@19 -- # rpc_cmd framework_start_init 00:21:20.788 09:49:43 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:20.788 09:49:43 -- common/autotest_common.sh@10 -- # set +x 00:21:21.046 09:49:44 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:21.046 09:49:44 -- target/bdev_io_wait.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:21:21.046 09:49:44 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:21.046 09:49:44 -- common/autotest_common.sh@10 -- # set +x 00:21:21.046 [2024-04-18 09:49:44.229120] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:21.046 09:49:44 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:21.046 09:49:44 -- target/bdev_io_wait.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:21:21.046 09:49:44 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:21.046 09:49:44 -- common/autotest_common.sh@10 -- # set +x 00:21:21.046 Malloc0 00:21:21.046 09:49:44 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:21.046 09:49:44 -- target/bdev_io_wait.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:21:21.046 09:49:44 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:21.046 09:49:44 -- common/autotest_common.sh@10 -- # set +x 00:21:21.046 09:49:44 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:21.046 09:49:44 -- target/bdev_io_wait.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:21:21.046 09:49:44 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:21.046 09:49:44 -- common/autotest_common.sh@10 -- # set +x 00:21:21.046 09:49:44 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:21.046 09:49:44 -- target/bdev_io_wait.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:21:21.046 09:49:44 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:21.046 09:49:44 -- common/autotest_common.sh@10 -- # set +x 00:21:21.046 [2024-04-18 09:49:44.372842] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:21.046 09:49:44 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:21.047 09:49:44 -- target/bdev_io_wait.sh@28 -- # WRITE_PID=69480 00:21:21.047 09:49:44 
-- target/bdev_io_wait.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x10 -i 1 --json /dev/fd/63 -q 128 -o 4096 -w write -t 1 -s 256 00:21:21.047 09:49:44 -- target/bdev_io_wait.sh@27 -- # gen_nvmf_target_json 00:21:21.047 09:49:44 -- nvmf/common.sh@521 -- # config=() 00:21:21.047 09:49:44 -- target/bdev_io_wait.sh@30 -- # READ_PID=69482 00:21:21.047 09:49:44 -- nvmf/common.sh@521 -- # local subsystem config 00:21:21.047 09:49:44 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:21:21.047 09:49:44 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:21:21.047 { 00:21:21.047 "params": { 00:21:21.047 "name": "Nvme$subsystem", 00:21:21.047 "trtype": "$TEST_TRANSPORT", 00:21:21.047 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:21.047 "adrfam": "ipv4", 00:21:21.047 "trsvcid": "$NVMF_PORT", 00:21:21.047 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:21.047 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:21.047 "hdgst": ${hdgst:-false}, 00:21:21.047 "ddgst": ${ddgst:-false} 00:21:21.047 }, 00:21:21.047 "method": "bdev_nvme_attach_controller" 00:21:21.047 } 00:21:21.047 EOF 00:21:21.047 )") 00:21:21.047 09:49:44 -- target/bdev_io_wait.sh@29 -- # gen_nvmf_target_json 00:21:21.047 09:49:44 -- target/bdev_io_wait.sh@29 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x20 -i 2 --json /dev/fd/63 -q 128 -o 4096 -w read -t 1 -s 256 00:21:21.047 09:49:44 -- nvmf/common.sh@521 -- # config=() 00:21:21.047 09:49:44 -- nvmf/common.sh@521 -- # local subsystem config 00:21:21.047 09:49:44 -- target/bdev_io_wait.sh@32 -- # FLUSH_PID=69484 00:21:21.047 09:49:44 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:21:21.047 09:49:44 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:21:21.047 { 00:21:21.047 "params": { 00:21:21.047 "name": "Nvme$subsystem", 00:21:21.047 "trtype": "$TEST_TRANSPORT", 00:21:21.047 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:21.047 "adrfam": "ipv4", 00:21:21.047 "trsvcid": "$NVMF_PORT", 00:21:21.047 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:21.047 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:21.047 "hdgst": ${hdgst:-false}, 00:21:21.047 "ddgst": ${ddgst:-false} 00:21:21.047 }, 00:21:21.047 "method": "bdev_nvme_attach_controller" 00:21:21.047 } 00:21:21.047 EOF 00:21:21.047 )") 00:21:21.047 09:49:44 -- nvmf/common.sh@543 -- # cat 00:21:21.047 09:49:44 -- target/bdev_io_wait.sh@31 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x40 -i 3 --json /dev/fd/63 -q 128 -o 4096 -w flush -t 1 -s 256 00:21:21.047 09:49:44 -- nvmf/common.sh@543 -- # cat 00:21:21.047 09:49:44 -- target/bdev_io_wait.sh@34 -- # UNMAP_PID=69488 00:21:21.047 09:49:44 -- target/bdev_io_wait.sh@35 -- # sync 00:21:21.047 09:49:44 -- target/bdev_io_wait.sh@31 -- # gen_nvmf_target_json 00:21:21.047 09:49:44 -- nvmf/common.sh@521 -- # config=() 00:21:21.047 09:49:44 -- target/bdev_io_wait.sh@33 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x80 -i 4 --json /dev/fd/63 -q 128 -o 4096 -w unmap -t 1 -s 256 00:21:21.047 09:49:44 -- nvmf/common.sh@545 -- # jq . 00:21:21.047 09:49:44 -- nvmf/common.sh@545 -- # jq . 
00:21:21.047 09:49:44 -- nvmf/common.sh@521 -- # local subsystem config 00:21:21.047 09:49:44 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:21:21.047 09:49:44 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:21:21.047 { 00:21:21.047 "params": { 00:21:21.047 "name": "Nvme$subsystem", 00:21:21.047 "trtype": "$TEST_TRANSPORT", 00:21:21.047 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:21.047 "adrfam": "ipv4", 00:21:21.047 "trsvcid": "$NVMF_PORT", 00:21:21.047 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:21.047 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:21.047 "hdgst": ${hdgst:-false}, 00:21:21.047 "ddgst": ${ddgst:-false} 00:21:21.047 }, 00:21:21.047 "method": "bdev_nvme_attach_controller" 00:21:21.047 } 00:21:21.047 EOF 00:21:21.047 )") 00:21:21.047 09:49:44 -- nvmf/common.sh@546 -- # IFS=, 00:21:21.047 09:49:44 -- target/bdev_io_wait.sh@33 -- # gen_nvmf_target_json 00:21:21.047 09:49:44 -- nvmf/common.sh@546 -- # IFS=, 00:21:21.047 09:49:44 -- nvmf/common.sh@547 -- # printf '%s\n' '{ 00:21:21.047 "params": { 00:21:21.047 "name": "Nvme1", 00:21:21.047 "trtype": "tcp", 00:21:21.047 "traddr": "10.0.0.2", 00:21:21.047 "adrfam": "ipv4", 00:21:21.047 "trsvcid": "4420", 00:21:21.047 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:21:21.047 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:21:21.047 "hdgst": false, 00:21:21.047 "ddgst": false 00:21:21.047 }, 00:21:21.047 "method": "bdev_nvme_attach_controller" 00:21:21.047 }' 00:21:21.047 09:49:44 -- nvmf/common.sh@521 -- # config=() 00:21:21.047 09:49:44 -- nvmf/common.sh@547 -- # printf '%s\n' '{ 00:21:21.047 "params": { 00:21:21.047 "name": "Nvme1", 00:21:21.047 "trtype": "tcp", 00:21:21.047 "traddr": "10.0.0.2", 00:21:21.047 "adrfam": "ipv4", 00:21:21.047 "trsvcid": "4420", 00:21:21.047 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:21:21.047 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:21:21.047 "hdgst": false, 00:21:21.047 "ddgst": false 00:21:21.047 }, 00:21:21.047 "method": "bdev_nvme_attach_controller" 00:21:21.047 }' 00:21:21.047 09:49:44 -- nvmf/common.sh@521 -- # local subsystem config 00:21:21.047 09:49:44 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:21:21.047 09:49:44 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:21:21.047 { 00:21:21.047 "params": { 00:21:21.047 "name": "Nvme$subsystem", 00:21:21.047 "trtype": "$TEST_TRANSPORT", 00:21:21.047 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:21.047 "adrfam": "ipv4", 00:21:21.047 "trsvcid": "$NVMF_PORT", 00:21:21.047 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:21.047 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:21.047 "hdgst": ${hdgst:-false}, 00:21:21.047 "ddgst": ${ddgst:-false} 00:21:21.047 }, 00:21:21.047 "method": "bdev_nvme_attach_controller" 00:21:21.047 } 00:21:21.047 EOF 00:21:21.047 )") 00:21:21.047 09:49:44 -- nvmf/common.sh@543 -- # cat 00:21:21.047 09:49:44 -- nvmf/common.sh@543 -- # cat 00:21:21.047 09:49:44 -- nvmf/common.sh@545 -- # jq . 
00:21:21.047 09:49:44 -- nvmf/common.sh@546 -- # IFS=, 00:21:21.047 09:49:44 -- nvmf/common.sh@547 -- # printf '%s\n' '{ 00:21:21.047 "params": { 00:21:21.047 "name": "Nvme1", 00:21:21.047 "trtype": "tcp", 00:21:21.047 "traddr": "10.0.0.2", 00:21:21.047 "adrfam": "ipv4", 00:21:21.047 "trsvcid": "4420", 00:21:21.047 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:21:21.047 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:21:21.047 "hdgst": false, 00:21:21.047 "ddgst": false 00:21:21.047 }, 00:21:21.047 "method": "bdev_nvme_attach_controller" 00:21:21.047 }' 00:21:21.047 09:49:44 -- nvmf/common.sh@545 -- # jq . 00:21:21.047 09:49:44 -- nvmf/common.sh@546 -- # IFS=, 00:21:21.047 09:49:44 -- nvmf/common.sh@547 -- # printf '%s\n' '{ 00:21:21.047 "params": { 00:21:21.047 "name": "Nvme1", 00:21:21.047 "trtype": "tcp", 00:21:21.047 "traddr": "10.0.0.2", 00:21:21.047 "adrfam": "ipv4", 00:21:21.047 "trsvcid": "4420", 00:21:21.047 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:21:21.047 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:21:21.047 "hdgst": false, 00:21:21.047 "ddgst": false 00:21:21.047 }, 00:21:21.047 "method": "bdev_nvme_attach_controller" 00:21:21.047 }' 00:21:21.047 09:49:44 -- target/bdev_io_wait.sh@37 -- # wait 69480 00:21:21.305 [2024-04-18 09:49:44.507773] Starting SPDK v24.05-pre git sha1 65b4e17c6 / DPDK 23.11.0 initialization... 00:21:21.305 [2024-04-18 09:49:44.508331] [ DPDK EAL parameters: bdevperf -c 0x20 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk2 --proc-type=auto ] 00:21:21.305 [2024-04-18 09:49:44.511475] Starting SPDK v24.05-pre git sha1 65b4e17c6 / DPDK 23.11.0 initialization... 00:21:21.305 [2024-04-18 09:49:44.511782] [ DPDK EAL parameters: bdevperf -c 0x10 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:21:21.305 [2024-04-18 09:49:44.523632] Starting SPDK v24.05-pre git sha1 65b4e17c6 / DPDK 23.11.0 initialization... 00:21:21.305 [2024-04-18 09:49:44.524069] [ DPDK EAL parameters: bdevperf -c 0x40 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk3 --proc-type=auto ] 00:21:21.305 [2024-04-18 09:49:44.535421] Starting SPDK v24.05-pre git sha1 65b4e17c6 / DPDK 23.11.0 initialization... 
00:21:21.305 [2024-04-18 09:49:44.535801] [ DPDK EAL parameters: bdevperf -c 0x80 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk4 --proc-type=auto ] 00:21:21.563 [2024-04-18 09:49:44.791763] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:21.563 [2024-04-18 09:49:44.846147] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:21.563 [2024-04-18 09:49:44.924086] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:21.821 [2024-04-18 09:49:45.042166] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:21.821 [2024-04-18 09:49:45.075766] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:21:21.821 [2024-04-18 09:49:45.174592] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 5 00:21:22.079 [2024-04-18 09:49:45.276327] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 6 00:21:22.079 [2024-04-18 09:49:45.411428] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 7 00:21:22.079 Running I/O for 1 seconds... 00:21:22.336 Running I/O for 1 seconds... 00:21:22.336 Running I/O for 1 seconds... 00:21:22.594 Running I/O for 1 seconds... 00:21:23.161 00:21:23.161 Latency(us) 00:21:23.161 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:23.161 Job: Nvme1n1 (Core Mask 0x10, workload: write, depth: 128, IO size: 4096) 00:21:23.161 Nvme1n1 : 1.01 8303.85 32.44 0.00 0.00 15337.11 3991.74 22520.55 00:21:23.161 =================================================================================================================== 00:21:23.161 Total : 8303.85 32.44 0.00 0.00 15337.11 3991.74 22520.55 00:21:23.419 00:21:23.419 Latency(us) 00:21:23.419 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:23.419 Job: Nvme1n1 (Core Mask 0x20, workload: read, depth: 128, IO size: 4096) 00:21:23.419 Nvme1n1 : 1.01 6299.04 24.61 0.00 0.00 20168.04 7864.32 27525.12 00:21:23.419 =================================================================================================================== 00:21:23.419 Total : 6299.04 24.61 0.00 0.00 20168.04 7864.32 27525.12 00:21:23.419 00:21:23.419 Latency(us) 00:21:23.419 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:23.419 Job: Nvme1n1 (Core Mask 0x40, workload: flush, depth: 128, IO size: 4096) 00:21:23.419 Nvme1n1 : 1.00 132279.11 516.72 0.00 0.00 964.51 467.32 1645.85 00:21:23.419 =================================================================================================================== 00:21:23.419 Total : 132279.11 516.72 0.00 0.00 964.51 467.32 1645.85 00:21:23.716 00:21:23.716 Latency(us) 00:21:23.716 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:23.716 Job: Nvme1n1 (Core Mask 0x80, workload: unmap, depth: 128, IO size: 4096) 00:21:23.716 Nvme1n1 : 1.01 7016.37 27.41 0.00 0.00 18140.95 9770.82 31695.59 00:21:23.716 =================================================================================================================== 00:21:23.717 Total : 7016.37 27.41 0.00 0.00 18140.95 9770.82 31695.59 00:21:23.974 09:49:47 -- target/bdev_io_wait.sh@38 -- # wait 69482 00:21:24.540 09:49:47 -- target/bdev_io_wait.sh@39 -- # wait 69484 00:21:24.540 09:49:47 -- target/bdev_io_wait.sh@40 -- # wait 69488 00:21:24.540 09:49:47 -- target/bdev_io_wait.sh@42 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 
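The four tables above come from four bdevperf processes hitting the same Malloc0-backed namespace over NVMe/TCP with write, read, flush and unmap workloads. A rough reconstruction of the setup the rpc_cmd lines perform, plus one of the four initiator invocations, is sketched below: rpc.py is the script behind the harness's rpc_cmd wrapper, the JSON wrapper around the bdev_nvme_attach_controller entry is an assumption (the trace only prints the per-controller object that gen_nvmf_target_json feeds to jq), and /tmp/nvme1.json stands in for the /dev/fd/63 substitution used above.

RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

# Target-side configuration, matching the rpc_cmd sequence traced above.
$RPC bdev_set_options -p 5 -c 1
$RPC framework_start_init
$RPC nvmf_create_transport -t tcp -o -u 8192
$RPC bdev_malloc_create 64 512 -b Malloc0
$RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
$RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
$RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

# Initiator-side bdev config; the "params" block is copied from the trace, the
# surrounding "subsystems"/"config" wrapper is an assumption about what
# gen_nvmf_target_json assembles.
cat > /tmp/nvme1.json <<'JSON'
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        {
          "method": "bdev_nvme_attach_controller",
          "params": {
            "name": "Nvme1",
            "trtype": "tcp",
            "traddr": "10.0.0.2",
            "adrfam": "ipv4",
            "trsvcid": "4420",
            "subnqn": "nqn.2016-06.io.spdk:cnode1",
            "hostnqn": "nqn.2016-06.io.spdk:host1",
            "hdgst": false,
            "ddgst": false
          }
        }
      ]
    }
  ]
}
JSON

# The write job from the trace; the read/flush/unmap jobs only change -w,
# the core mask (-m) and the shared-memory id (-i).
/home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
    -m 0x10 -i 1 --json /tmp/nvme1.json -q 128 -o 4096 -w write -t 1 -s 256

Running the four workloads as separate processes with distinct core masks (0x10/0x20/0x40/0x80) keeps each reactor on its own core, which is what the "Reactor started on core 4..7" lines above report.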
00:21:24.540 09:49:47 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:24.540 09:49:47 -- common/autotest_common.sh@10 -- # set +x 00:21:24.540 09:49:47 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:24.540 09:49:47 -- target/bdev_io_wait.sh@44 -- # trap - SIGINT SIGTERM EXIT 00:21:24.540 09:49:47 -- target/bdev_io_wait.sh@46 -- # nvmftestfini 00:21:24.540 09:49:47 -- nvmf/common.sh@477 -- # nvmfcleanup 00:21:24.540 09:49:47 -- nvmf/common.sh@117 -- # sync 00:21:24.540 09:49:47 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:21:24.540 09:49:47 -- nvmf/common.sh@120 -- # set +e 00:21:24.540 09:49:47 -- nvmf/common.sh@121 -- # for i in {1..20} 00:21:24.540 09:49:47 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:21:24.540 rmmod nvme_tcp 00:21:24.540 rmmod nvme_fabrics 00:21:24.540 rmmod nvme_keyring 00:21:24.540 09:49:47 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:21:24.540 09:49:47 -- nvmf/common.sh@124 -- # set -e 00:21:24.540 09:49:47 -- nvmf/common.sh@125 -- # return 0 00:21:24.540 09:49:47 -- nvmf/common.sh@478 -- # '[' -n 69439 ']' 00:21:24.540 09:49:47 -- nvmf/common.sh@479 -- # killprocess 69439 00:21:24.540 09:49:47 -- common/autotest_common.sh@936 -- # '[' -z 69439 ']' 00:21:24.540 09:49:47 -- common/autotest_common.sh@940 -- # kill -0 69439 00:21:24.540 09:49:47 -- common/autotest_common.sh@941 -- # uname 00:21:24.540 09:49:47 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:21:24.540 09:49:47 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 69439 00:21:24.798 killing process with pid 69439 00:21:24.798 09:49:47 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:21:24.798 09:49:47 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:21:24.798 09:49:47 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 69439' 00:21:24.798 09:49:47 -- common/autotest_common.sh@955 -- # kill 69439 00:21:24.798 09:49:47 -- common/autotest_common.sh@960 -- # wait 69439 00:21:25.734 09:49:49 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:21:25.734 09:49:49 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:21:25.734 09:49:49 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:21:25.734 09:49:49 -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:21:25.734 09:49:49 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:21:25.734 09:49:49 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:25.734 09:49:49 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:21:25.734 09:49:49 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:25.734 09:49:49 -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:21:25.734 ************************************ 00:21:25.734 END TEST nvmf_bdev_io_wait 00:21:25.734 ************************************ 00:21:25.734 00:21:25.734 real 0m6.675s 00:21:25.734 user 0m30.548s 00:21:25.734 sys 0m2.931s 00:21:25.734 09:49:49 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:21:25.734 09:49:49 -- common/autotest_common.sh@10 -- # set +x 00:21:25.992 09:49:49 -- nvmf/nvmf.sh@51 -- # run_test nvmf_queue_depth /home/vagrant/spdk_repo/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:21:25.992 09:49:49 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:21:25.992 09:49:49 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:21:25.992 09:49:49 -- common/autotest_common.sh@10 -- # set +x 00:21:25.993 ************************************ 00:21:25.993 START TEST nvmf_queue_depth 00:21:25.993 
************************************ 00:21:25.993 09:49:49 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:21:25.993 * Looking for test storage... 00:21:25.993 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:21:25.993 09:49:49 -- target/queue_depth.sh@12 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:21:25.993 09:49:49 -- nvmf/common.sh@7 -- # uname -s 00:21:25.993 09:49:49 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:25.993 09:49:49 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:25.993 09:49:49 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:25.993 09:49:49 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:25.993 09:49:49 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:25.993 09:49:49 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:25.993 09:49:49 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:25.993 09:49:49 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:25.993 09:49:49 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:25.993 09:49:49 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:25.993 09:49:49 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:e0c1fd61-2902-4c83-8893-eeb512aeb67d 00:21:25.993 09:49:49 -- nvmf/common.sh@18 -- # NVME_HOSTID=e0c1fd61-2902-4c83-8893-eeb512aeb67d 00:21:25.993 09:49:49 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:25.993 09:49:49 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:25.993 09:49:49 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:21:25.993 09:49:49 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:25.993 09:49:49 -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:21:25.993 09:49:49 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:25.993 09:49:49 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:25.993 09:49:49 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:25.993 09:49:49 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:25.993 09:49:49 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:25.993 09:49:49 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:25.993 09:49:49 -- paths/export.sh@5 -- # export PATH 00:21:25.993 09:49:49 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:25.993 09:49:49 -- nvmf/common.sh@47 -- # : 0 00:21:25.993 09:49:49 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:21:25.993 09:49:49 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:21:25.993 09:49:49 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:21:25.993 09:49:49 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:25.993 09:49:49 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:25.993 09:49:49 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:21:25.993 09:49:49 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:21:25.993 09:49:49 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:21:25.993 09:49:49 -- target/queue_depth.sh@14 -- # MALLOC_BDEV_SIZE=64 00:21:25.993 09:49:49 -- target/queue_depth.sh@15 -- # MALLOC_BLOCK_SIZE=512 00:21:25.993 09:49:49 -- target/queue_depth.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:21:25.993 09:49:49 -- target/queue_depth.sh@19 -- # nvmftestinit 00:21:25.993 09:49:49 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:21:25.993 09:49:49 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:25.993 09:49:49 -- nvmf/common.sh@437 -- # prepare_net_devs 00:21:25.993 09:49:49 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:21:25.993 09:49:49 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:21:25.993 09:49:49 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:25.993 09:49:49 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:21:25.993 09:49:49 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:25.993 09:49:49 -- nvmf/common.sh@403 -- # [[ virt != virt ]] 00:21:25.993 09:49:49 -- nvmf/common.sh@405 -- # [[ no == yes ]] 00:21:25.993 09:49:49 -- nvmf/common.sh@412 -- # [[ virt == phy ]] 00:21:25.993 09:49:49 -- nvmf/common.sh@415 -- # [[ virt == phy-fallback ]] 00:21:25.993 09:49:49 -- nvmf/common.sh@420 -- # [[ tcp == tcp ]] 00:21:25.993 09:49:49 -- nvmf/common.sh@421 -- # nvmf_veth_init 00:21:25.993 09:49:49 -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:25.993 09:49:49 -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:21:25.993 09:49:49 -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:21:25.993 09:49:49 -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:21:25.993 09:49:49 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:21:25.993 09:49:49 -- 
nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:21:25.993 09:49:49 -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:21:25.993 09:49:49 -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:25.993 09:49:49 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:21:25.993 09:49:49 -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:21:25.993 09:49:49 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:21:25.993 09:49:49 -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:21:25.993 09:49:49 -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:21:25.993 09:49:49 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:21:25.993 Cannot find device "nvmf_tgt_br" 00:21:25.993 09:49:49 -- nvmf/common.sh@155 -- # true 00:21:25.993 09:49:49 -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:21:25.993 Cannot find device "nvmf_tgt_br2" 00:21:25.993 09:49:49 -- nvmf/common.sh@156 -- # true 00:21:25.993 09:49:49 -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:21:25.993 09:49:49 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:21:25.993 Cannot find device "nvmf_tgt_br" 00:21:25.993 09:49:49 -- nvmf/common.sh@158 -- # true 00:21:25.993 09:49:49 -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:21:25.993 Cannot find device "nvmf_tgt_br2" 00:21:25.993 09:49:49 -- nvmf/common.sh@159 -- # true 00:21:25.993 09:49:49 -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:21:26.252 09:49:49 -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:21:26.252 09:49:49 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:21:26.252 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:21:26.252 09:49:49 -- nvmf/common.sh@162 -- # true 00:21:26.252 09:49:49 -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:21:26.252 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:21:26.252 09:49:49 -- nvmf/common.sh@163 -- # true 00:21:26.252 09:49:49 -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:21:26.252 09:49:49 -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:21:26.252 09:49:49 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:21:26.252 09:49:49 -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:21:26.252 09:49:49 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:21:26.252 09:49:49 -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:21:26.252 09:49:49 -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:21:26.252 09:49:49 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:21:26.252 09:49:49 -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:21:26.252 09:49:49 -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:21:26.252 09:49:49 -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:21:26.252 09:49:49 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:21:26.252 09:49:49 -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:21:26.252 09:49:49 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:21:26.252 09:49:49 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip 
link set nvmf_tgt_if2 up 00:21:26.252 09:49:49 -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:21:26.252 09:49:49 -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:21:26.252 09:49:49 -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:21:26.252 09:49:49 -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:21:26.252 09:49:49 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:21:26.252 09:49:49 -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:21:26.252 09:49:49 -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:21:26.252 09:49:49 -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:21:26.252 09:49:49 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:21:26.252 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:21:26.252 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.080 ms 00:21:26.252 00:21:26.252 --- 10.0.0.2 ping statistics --- 00:21:26.252 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:26.252 rtt min/avg/max/mdev = 0.080/0.080/0.080/0.000 ms 00:21:26.252 09:49:49 -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:21:26.252 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:21:26.252 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.051 ms 00:21:26.252 00:21:26.252 --- 10.0.0.3 ping statistics --- 00:21:26.252 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:26.252 rtt min/avg/max/mdev = 0.051/0.051/0.051/0.000 ms 00:21:26.252 09:49:49 -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:21:26.252 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:21:26.252 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.026 ms 00:21:26.252 00:21:26.252 --- 10.0.0.1 ping statistics --- 00:21:26.252 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:26.252 rtt min/avg/max/mdev = 0.026/0.026/0.026/0.000 ms 00:21:26.252 09:49:49 -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:26.252 09:49:49 -- nvmf/common.sh@422 -- # return 0 00:21:26.252 09:49:49 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:21:26.252 09:49:49 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:26.252 09:49:49 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:21:26.252 09:49:49 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:21:26.252 09:49:49 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:26.252 09:49:49 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:21:26.511 09:49:49 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:21:26.511 09:49:49 -- target/queue_depth.sh@21 -- # nvmfappstart -m 0x2 00:21:26.511 09:49:49 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:21:26.511 09:49:49 -- common/autotest_common.sh@710 -- # xtrace_disable 00:21:26.511 09:49:49 -- common/autotest_common.sh@10 -- # set +x 00:21:26.511 09:49:49 -- nvmf/common.sh@470 -- # nvmfpid=69750 00:21:26.511 09:49:49 -- nvmf/common.sh@469 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:21:26.511 09:49:49 -- nvmf/common.sh@471 -- # waitforlisten 69750 00:21:26.511 09:49:49 -- common/autotest_common.sh@817 -- # '[' -z 69750 ']' 00:21:26.511 09:49:49 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:26.511 09:49:49 -- common/autotest_common.sh@822 -- # local max_retries=100 00:21:26.511 Waiting for process to start up and listen on UNIX domain socket 
/var/tmp/spdk.sock... 00:21:26.511 09:49:49 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:26.511 09:49:49 -- common/autotest_common.sh@826 -- # xtrace_disable 00:21:26.511 09:49:49 -- common/autotest_common.sh@10 -- # set +x 00:21:26.511 [2024-04-18 09:49:49.779504] Starting SPDK v24.05-pre git sha1 65b4e17c6 / DPDK 23.11.0 initialization... 00:21:26.511 [2024-04-18 09:49:49.779889] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:26.770 [2024-04-18 09:49:49.964257] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:27.062 [2024-04-18 09:49:50.208620] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:27.062 [2024-04-18 09:49:50.208677] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:27.062 [2024-04-18 09:49:50.208699] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:27.062 [2024-04-18 09:49:50.208726] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:27.062 [2024-04-18 09:49:50.208752] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:21:27.062 [2024-04-18 09:49:50.208808] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:21:27.640 09:49:50 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:21:27.640 09:49:50 -- common/autotest_common.sh@850 -- # return 0 00:21:27.640 09:49:50 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:21:27.640 09:49:50 -- common/autotest_common.sh@716 -- # xtrace_disable 00:21:27.640 09:49:50 -- common/autotest_common.sh@10 -- # set +x 00:21:27.640 09:49:50 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:27.640 09:49:50 -- target/queue_depth.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:21:27.640 09:49:50 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:27.640 09:49:50 -- common/autotest_common.sh@10 -- # set +x 00:21:27.640 [2024-04-18 09:49:50.786904] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:27.640 09:49:50 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:27.640 09:49:50 -- target/queue_depth.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:21:27.640 09:49:50 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:27.640 09:49:50 -- common/autotest_common.sh@10 -- # set +x 00:21:27.640 Malloc0 00:21:27.640 09:49:50 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:27.640 09:49:50 -- target/queue_depth.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:21:27.640 09:49:50 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:27.640 09:49:50 -- common/autotest_common.sh@10 -- # set +x 00:21:27.640 09:49:50 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:27.640 09:49:50 -- target/queue_depth.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:21:27.640 09:49:50 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:27.640 09:49:50 -- common/autotest_common.sh@10 -- # set +x 00:21:27.640 09:49:50 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:27.640 09:49:50 -- 
target/queue_depth.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:21:27.640 09:49:50 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:27.640 09:49:50 -- common/autotest_common.sh@10 -- # set +x 00:21:27.640 [2024-04-18 09:49:50.906432] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:27.640 09:49:50 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:27.640 09:49:50 -- target/queue_depth.sh@30 -- # bdevperf_pid=69782 00:21:27.640 09:49:50 -- target/queue_depth.sh@29 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 00:21:27.640 09:49:50 -- target/queue_depth.sh@32 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:21:27.640 09:49:50 -- target/queue_depth.sh@33 -- # waitforlisten 69782 /var/tmp/bdevperf.sock 00:21:27.640 09:49:50 -- common/autotest_common.sh@817 -- # '[' -z 69782 ']' 00:21:27.640 09:49:50 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:21:27.640 09:49:50 -- common/autotest_common.sh@822 -- # local max_retries=100 00:21:27.640 09:49:50 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:21:27.640 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:21:27.640 09:49:50 -- common/autotest_common.sh@826 -- # xtrace_disable 00:21:27.640 09:49:50 -- common/autotest_common.sh@10 -- # set +x 00:21:27.640 [2024-04-18 09:49:51.010967] Starting SPDK v24.05-pre git sha1 65b4e17c6 / DPDK 23.11.0 initialization... 00:21:27.640 [2024-04-18 09:49:51.011163] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69782 ] 00:21:27.899 [2024-04-18 09:49:51.186455] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:28.157 [2024-04-18 09:49:51.472317] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:21:28.724 09:49:51 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:21:28.724 09:49:51 -- common/autotest_common.sh@850 -- # return 0 00:21:28.724 09:49:51 -- target/queue_depth.sh@34 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:21:28.724 09:49:51 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:28.724 09:49:51 -- common/autotest_common.sh@10 -- # set +x 00:21:28.724 NVMe0n1 00:21:28.724 09:49:52 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:28.724 09:49:52 -- target/queue_depth.sh@35 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:21:28.982 Running I/O for 10 seconds... 
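Unlike the bdev_io_wait jobs earlier, this queue-depth run drives bdevperf through its own RPC socket instead of a --json config: the binary starts idle with -z, the NVMe-oF controller is attached through /var/tmp/bdevperf.sock, and the workload is triggered with the bdevperf.py helper. A sketch of those three steps with the arguments copied from the trace (the harness additionally uses waitforlisten to wait until the socket exists before issuing RPCs):

BDEVPERF=/home/vagrant/spdk_repo/spdk/build/examples/bdevperf
RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
SOCK=/var/tmp/bdevperf.sock

# 1. Start bdevperf idle (-z) with queue depth 1024 and a private RPC socket.
$BDEVPERF -z -r "$SOCK" -q 1024 -o 4096 -w verify -t 10 &
bdevperf_pid=$!

# 2. Attach the target namespace; the resulting bdev shows up as NVMe0n1.
$RPC -s "$SOCK" bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 \
    -f ipv4 -n nqn.2016-06.io.spdk:cnode1

# 3. Run the configured verify workload, then stop the bdevperf application.
/home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s "$SOCK" perform_tests
kill "$bdevperf_pid"
wait "$bdevperf_pid"

With 1024 I/Os kept outstanding against a single namespace, the average latency in the table that follows is dominated by queueing delay, roughly queue depth divided by IOPS.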
00:21:38.975 00:21:38.975 Latency(us) 00:21:38.975 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:38.975 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 1024, IO size: 4096) 00:21:38.975 Verification LBA range: start 0x0 length 0x4000 00:21:38.975 NVMe0n1 : 10.13 5419.85 21.17 0.00 0.00 187612.26 28240.06 125829.12 00:21:38.975 =================================================================================================================== 00:21:38.975 Total : 5419.85 21.17 0.00 0.00 187612.26 28240.06 125829.12 00:21:38.975 0 00:21:38.975 09:50:02 -- target/queue_depth.sh@39 -- # killprocess 69782 00:21:38.975 09:50:02 -- common/autotest_common.sh@936 -- # '[' -z 69782 ']' 00:21:38.975 09:50:02 -- common/autotest_common.sh@940 -- # kill -0 69782 00:21:38.975 09:50:02 -- common/autotest_common.sh@941 -- # uname 00:21:38.975 09:50:02 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:21:38.975 09:50:02 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 69782 00:21:38.975 killing process with pid 69782 00:21:38.975 Received shutdown signal, test time was about 10.000000 seconds 00:21:38.975 00:21:38.975 Latency(us) 00:21:38.975 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:38.975 =================================================================================================================== 00:21:38.975 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:21:38.975 09:50:02 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:21:38.975 09:50:02 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:21:38.975 09:50:02 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 69782' 00:21:38.975 09:50:02 -- common/autotest_common.sh@955 -- # kill 69782 00:21:38.975 09:50:02 -- common/autotest_common.sh@960 -- # wait 69782 00:21:40.371 09:50:03 -- target/queue_depth.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:21:40.371 09:50:03 -- target/queue_depth.sh@43 -- # nvmftestfini 00:21:40.371 09:50:03 -- nvmf/common.sh@477 -- # nvmfcleanup 00:21:40.371 09:50:03 -- nvmf/common.sh@117 -- # sync 00:21:40.371 09:50:03 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:21:40.371 09:50:03 -- nvmf/common.sh@120 -- # set +e 00:21:40.371 09:50:03 -- nvmf/common.sh@121 -- # for i in {1..20} 00:21:40.371 09:50:03 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:21:40.371 rmmod nvme_tcp 00:21:40.371 rmmod nvme_fabrics 00:21:40.371 rmmod nvme_keyring 00:21:40.371 09:50:03 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:21:40.371 09:50:03 -- nvmf/common.sh@124 -- # set -e 00:21:40.371 09:50:03 -- nvmf/common.sh@125 -- # return 0 00:21:40.371 09:50:03 -- nvmf/common.sh@478 -- # '[' -n 69750 ']' 00:21:40.371 09:50:03 -- nvmf/common.sh@479 -- # killprocess 69750 00:21:40.371 09:50:03 -- common/autotest_common.sh@936 -- # '[' -z 69750 ']' 00:21:40.371 09:50:03 -- common/autotest_common.sh@940 -- # kill -0 69750 00:21:40.371 09:50:03 -- common/autotest_common.sh@941 -- # uname 00:21:40.371 09:50:03 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:21:40.371 09:50:03 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 69750 00:21:40.371 killing process with pid 69750 00:21:40.371 09:50:03 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:21:40.371 09:50:03 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:21:40.371 09:50:03 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 69750' 00:21:40.371 09:50:03 -- 
common/autotest_common.sh@955 -- # kill 69750 00:21:40.371 09:50:03 -- common/autotest_common.sh@960 -- # wait 69750 00:21:41.748 09:50:04 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:21:41.748 09:50:04 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:21:41.748 09:50:04 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:21:41.748 09:50:04 -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:21:41.748 09:50:04 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:21:41.748 09:50:04 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:41.748 09:50:04 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:21:41.748 09:50:04 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:41.748 09:50:04 -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:21:41.748 ************************************ 00:21:41.748 END TEST nvmf_queue_depth 00:21:41.748 ************************************ 00:21:41.748 00:21:41.748 real 0m15.781s 00:21:41.748 user 0m26.711s 00:21:41.748 sys 0m2.333s 00:21:41.748 09:50:04 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:21:41.748 09:50:04 -- common/autotest_common.sh@10 -- # set +x 00:21:41.748 09:50:05 -- nvmf/nvmf.sh@52 -- # run_test nvmf_multipath /home/vagrant/spdk_repo/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:21:41.748 09:50:05 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:21:41.748 09:50:05 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:21:41.748 09:50:05 -- common/autotest_common.sh@10 -- # set +x 00:21:41.748 ************************************ 00:21:41.748 START TEST nvmf_multipath 00:21:41.748 ************************************ 00:21:41.748 09:50:05 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:21:42.007 * Looking for test storage... 
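For completeness, the teardown that closed the queue-depth test just above (and the bdev_io_wait test before it) follows the same nvmftestfini pattern each time: unload the NVMe/TCP initiator modules, kill the nvmf_tgt process, drop the test namespace and flush the initiator address. A short sketch, with the namespace removal marked as an assumption because the _remove_spdk_ns body is never echoed in the trace:

# Mirrors the modprobe -v -r / rmmod lines in the trace.
modprobe -v -r nvme-tcp
modprobe -v -r nvme-fabrics
# Stop the target recorded in nvmfpid (the harness kills it and then waits on the pid).
kill "$nvmfpid"
# Assumption: _remove_spdk_ns deletes the per-test namespace created earlier.
ip netns delete nvmf_tgt_ns_spdk
# Flush the initiator-side address, as nvmf/common.sh@279 does above.
ip -4 addr flush nvmf_init_if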
00:21:42.007 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:21:42.007 09:50:05 -- target/multipath.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:21:42.007 09:50:05 -- nvmf/common.sh@7 -- # uname -s 00:21:42.007 09:50:05 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:42.007 09:50:05 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:42.007 09:50:05 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:42.007 09:50:05 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:42.007 09:50:05 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:42.007 09:50:05 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:42.007 09:50:05 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:42.007 09:50:05 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:42.007 09:50:05 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:42.007 09:50:05 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:42.007 09:50:05 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:e0c1fd61-2902-4c83-8893-eeb512aeb67d 00:21:42.007 09:50:05 -- nvmf/common.sh@18 -- # NVME_HOSTID=e0c1fd61-2902-4c83-8893-eeb512aeb67d 00:21:42.007 09:50:05 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:42.007 09:50:05 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:42.007 09:50:05 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:21:42.007 09:50:05 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:42.007 09:50:05 -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:21:42.007 09:50:05 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:42.007 09:50:05 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:42.007 09:50:05 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:42.007 09:50:05 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:42.007 09:50:05 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:42.007 09:50:05 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:42.007 09:50:05 -- paths/export.sh@5 -- # export PATH 00:21:42.007 09:50:05 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:42.007 09:50:05 -- nvmf/common.sh@47 -- # : 0 00:21:42.007 09:50:05 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:21:42.007 09:50:05 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:21:42.007 09:50:05 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:21:42.007 09:50:05 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:42.007 09:50:05 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:42.007 09:50:05 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:21:42.007 09:50:05 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:21:42.007 09:50:05 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:21:42.007 09:50:05 -- target/multipath.sh@11 -- # MALLOC_BDEV_SIZE=64 00:21:42.007 09:50:05 -- target/multipath.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:21:42.007 09:50:05 -- target/multipath.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:21:42.007 09:50:05 -- target/multipath.sh@15 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:21:42.007 09:50:05 -- target/multipath.sh@43 -- # nvmftestinit 00:21:42.007 09:50:05 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:21:42.007 09:50:05 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:42.007 09:50:05 -- nvmf/common.sh@437 -- # prepare_net_devs 00:21:42.007 09:50:05 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:21:42.007 09:50:05 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:21:42.007 09:50:05 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:42.007 09:50:05 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:21:42.007 09:50:05 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:42.007 09:50:05 -- nvmf/common.sh@403 -- # [[ virt != virt ]] 00:21:42.007 09:50:05 -- nvmf/common.sh@405 -- # [[ no == yes ]] 00:21:42.007 09:50:05 -- nvmf/common.sh@412 -- # [[ virt == phy ]] 00:21:42.007 09:50:05 -- nvmf/common.sh@415 -- # [[ virt == phy-fallback ]] 00:21:42.007 09:50:05 -- nvmf/common.sh@420 -- # [[ tcp == tcp ]] 00:21:42.007 09:50:05 -- nvmf/common.sh@421 -- # nvmf_veth_init 00:21:42.007 09:50:05 -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:42.007 09:50:05 -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:21:42.007 09:50:05 -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:21:42.007 09:50:05 -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:21:42.007 09:50:05 -- 
nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:21:42.007 09:50:05 -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:21:42.007 09:50:05 -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:21:42.007 09:50:05 -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:42.007 09:50:05 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:21:42.007 09:50:05 -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:21:42.007 09:50:05 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:21:42.007 09:50:05 -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:21:42.007 09:50:05 -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:21:42.007 09:50:05 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:21:42.007 Cannot find device "nvmf_tgt_br" 00:21:42.007 09:50:05 -- nvmf/common.sh@155 -- # true 00:21:42.007 09:50:05 -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:21:42.007 Cannot find device "nvmf_tgt_br2" 00:21:42.007 09:50:05 -- nvmf/common.sh@156 -- # true 00:21:42.007 09:50:05 -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:21:42.007 09:50:05 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:21:42.007 Cannot find device "nvmf_tgt_br" 00:21:42.007 09:50:05 -- nvmf/common.sh@158 -- # true 00:21:42.007 09:50:05 -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:21:42.007 Cannot find device "nvmf_tgt_br2" 00:21:42.007 09:50:05 -- nvmf/common.sh@159 -- # true 00:21:42.007 09:50:05 -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:21:42.008 09:50:05 -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:21:42.008 09:50:05 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:21:42.008 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:21:42.008 09:50:05 -- nvmf/common.sh@162 -- # true 00:21:42.008 09:50:05 -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:21:42.008 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:21:42.008 09:50:05 -- nvmf/common.sh@163 -- # true 00:21:42.008 09:50:05 -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:21:42.008 09:50:05 -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:21:42.008 09:50:05 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:21:42.008 09:50:05 -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:21:42.008 09:50:05 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:21:42.266 09:50:05 -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:21:42.266 09:50:05 -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:21:42.266 09:50:05 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:21:42.266 09:50:05 -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:21:42.266 09:50:05 -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:21:42.266 09:50:05 -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:21:42.266 09:50:05 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:21:42.266 09:50:05 -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:21:42.266 09:50:05 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if 
up 00:21:42.266 09:50:05 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:21:42.266 09:50:05 -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:21:42.266 09:50:05 -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:21:42.266 09:50:05 -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:21:42.266 09:50:05 -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:21:42.266 09:50:05 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:21:42.266 09:50:05 -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:21:42.266 09:50:05 -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:21:42.266 09:50:05 -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:21:42.266 09:50:05 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:21:42.266 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:21:42.266 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.086 ms 00:21:42.266 00:21:42.266 --- 10.0.0.2 ping statistics --- 00:21:42.266 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:42.266 rtt min/avg/max/mdev = 0.086/0.086/0.086/0.000 ms 00:21:42.266 09:50:05 -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:21:42.266 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:21:42.266 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.074 ms 00:21:42.266 00:21:42.266 --- 10.0.0.3 ping statistics --- 00:21:42.266 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:42.266 rtt min/avg/max/mdev = 0.074/0.074/0.074/0.000 ms 00:21:42.266 09:50:05 -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:21:42.266 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:21:42.266 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.041 ms 00:21:42.266 00:21:42.266 --- 10.0.0.1 ping statistics --- 00:21:42.266 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:42.266 rtt min/avg/max/mdev = 0.041/0.041/0.041/0.000 ms 00:21:42.266 09:50:05 -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:42.266 09:50:05 -- nvmf/common.sh@422 -- # return 0 00:21:42.266 09:50:05 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:21:42.266 09:50:05 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:42.266 09:50:05 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:21:42.266 09:50:05 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:21:42.266 09:50:05 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:42.266 09:50:05 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:21:42.266 09:50:05 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:21:42.266 09:50:05 -- target/multipath.sh@45 -- # '[' -z 10.0.0.3 ']' 00:21:42.266 09:50:05 -- target/multipath.sh@51 -- # '[' tcp '!=' tcp ']' 00:21:42.266 09:50:05 -- target/multipath.sh@57 -- # nvmfappstart -m 0xF 00:21:42.266 09:50:05 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:21:42.266 09:50:05 -- common/autotest_common.sh@710 -- # xtrace_disable 00:21:42.266 09:50:05 -- common/autotest_common.sh@10 -- # set +x 00:21:42.266 09:50:05 -- nvmf/common.sh@470 -- # nvmfpid=70139 00:21:42.266 09:50:05 -- nvmf/common.sh@469 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:21:42.266 09:50:05 -- nvmf/common.sh@471 -- # waitforlisten 70139 00:21:42.266 09:50:05 -- common/autotest_common.sh@817 -- # '[' -z 70139 ']' 00:21:42.266 09:50:05 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:42.266 09:50:05 -- common/autotest_common.sh@822 -- # local max_retries=100 00:21:42.266 09:50:05 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:42.266 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:42.266 09:50:05 -- common/autotest_common.sh@826 -- # xtrace_disable 00:21:42.266 09:50:05 -- common/autotest_common.sh@10 -- # set +x 00:21:42.587 [2024-04-18 09:50:05.717365] Starting SPDK v24.05-pre git sha1 65b4e17c6 / DPDK 23.11.0 initialization... 00:21:42.587 [2024-04-18 09:50:05.717550] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:42.587 [2024-04-18 09:50:05.900171] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:21:42.861 [2024-04-18 09:50:06.198310] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:42.861 [2024-04-18 09:50:06.198374] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:42.861 [2024-04-18 09:50:06.198395] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:42.861 [2024-04-18 09:50:06.198408] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:42.861 [2024-04-18 09:50:06.198422] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
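The nvmf_veth_init trace above reduces to a small, fixed topology: one initiator-side veth on the host, two target-side veths moved into a private network namespace, all tied together by a bridge. The following is a condensed sketch of the commands visible in the trace (names and addresses copied from the log), assuming a host with no leftover interfaces from a previous run:

  # Condensed from the traced nvmf_veth_init steps above; assumes no stale interfaces exist.
  ip netns add nvmf_tgt_ns_spdk
  ip link add nvmf_init_if type veth peer name nvmf_init_br     # initiator side (10.0.0.1)
  ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br      # first target path (10.0.0.2)
  ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2     # second target path (10.0.0.3)
  ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
  ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
  ip addr add 10.0.0.1/24 dev nvmf_init_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2
  ip link set nvmf_init_if up; ip link set nvmf_init_br up
  ip link set nvmf_tgt_br up;  ip link set nvmf_tgt_br2 up
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
  ip netns exec nvmf_tgt_ns_spdk ip link set lo up
  ip link add nvmf_br type bridge && ip link set nvmf_br up
  ip link set nvmf_init_br master nvmf_br
  ip link set nvmf_tgt_br  master nvmf_br
  ip link set nvmf_tgt_br2 master nvmf_br
  iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
  iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
  ping -c 1 10.0.0.2 && ping -c 1 10.0.0.3                      # initiator -> both target paths
  ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1             # target namespace -> initiator

The successful pings recorded above confirm this layout before the nvmf target is started inside the namespace.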
00:21:42.861 [2024-04-18 09:50:06.198825] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:21:42.861 [2024-04-18 09:50:06.199074] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:21:42.861 [2024-04-18 09:50:06.199217] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:21:42.861 [2024-04-18 09:50:06.199260] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:21:43.429 09:50:06 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:21:43.429 09:50:06 -- common/autotest_common.sh@850 -- # return 0 00:21:43.429 09:50:06 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:21:43.429 09:50:06 -- common/autotest_common.sh@716 -- # xtrace_disable 00:21:43.429 09:50:06 -- common/autotest_common.sh@10 -- # set +x 00:21:43.429 09:50:06 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:43.429 09:50:06 -- target/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:21:43.688 [2024-04-18 09:50:06.916667] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:43.688 09:50:06 -- target/multipath.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:21:43.947 Malloc0 00:21:43.947 09:50:07 -- target/multipath.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME -r 00:21:44.206 09:50:07 -- target/multipath.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:21:44.465 09:50:07 -- target/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:21:44.724 [2024-04-18 09:50:07.969029] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:44.724 09:50:07 -- target/multipath.sh@65 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:21:44.983 [2024-04-18 09:50:08.193326] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:21:44.983 09:50:08 -- target/multipath.sh@67 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:e0c1fd61-2902-4c83-8893-eeb512aeb67d --hostid=e0c1fd61-2902-4c83-8893-eeb512aeb67d -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 -g -G 00:21:44.983 09:50:08 -- target/multipath.sh@68 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:e0c1fd61-2902-4c83-8893-eeb512aeb67d --hostid=e0c1fd61-2902-4c83-8893-eeb512aeb67d -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.3 -s 4420 -g -G 00:21:45.241 09:50:08 -- target/multipath.sh@69 -- # waitforserial SPDKISFASTANDAWESOME 00:21:45.241 09:50:08 -- common/autotest_common.sh@1184 -- # local i=0 00:21:45.241 09:50:08 -- common/autotest_common.sh@1185 -- # local nvme_device_counter=1 nvme_devices=0 00:21:45.241 09:50:08 -- common/autotest_common.sh@1186 -- # [[ -n '' ]] 00:21:45.241 09:50:08 -- common/autotest_common.sh@1191 -- # sleep 2 00:21:47.214 09:50:10 -- common/autotest_common.sh@1192 -- # (( i++ <= 15 )) 00:21:47.214 09:50:10 -- common/autotest_common.sh@1193 -- # lsblk -l -o NAME,SERIAL 00:21:47.214 09:50:10 -- common/autotest_common.sh@1193 -- # grep -c SPDKISFASTANDAWESOME 00:21:47.214 09:50:10 -- common/autotest_common.sh@1193 -- # nvme_devices=1 00:21:47.214 09:50:10 -- 
common/autotest_common.sh@1194 -- # (( nvme_devices == nvme_device_counter )) 00:21:47.214 09:50:10 -- common/autotest_common.sh@1194 -- # return 0 00:21:47.214 09:50:10 -- target/multipath.sh@72 -- # get_subsystem nqn.2016-06.io.spdk:cnode1 SPDKISFASTANDAWESOME 00:21:47.214 09:50:10 -- target/multipath.sh@34 -- # local nqn=nqn.2016-06.io.spdk:cnode1 serial=SPDKISFASTANDAWESOME s 00:21:47.214 09:50:10 -- target/multipath.sh@36 -- # for s in /sys/class/nvme-subsystem/* 00:21:47.214 09:50:10 -- target/multipath.sh@37 -- # [[ nqn.2016-06.io.spdk:cnode1 == \n\q\n\.\2\0\1\6\-\0\6\.\i\o\.\s\p\d\k\:\c\n\o\d\e\1 ]] 00:21:47.214 09:50:10 -- target/multipath.sh@37 -- # [[ SPDKISFASTANDAWESOME == \S\P\D\K\I\S\F\A\S\T\A\N\D\A\W\E\S\O\M\E ]] 00:21:47.214 09:50:10 -- target/multipath.sh@38 -- # echo nvme-subsys0 00:21:47.214 09:50:10 -- target/multipath.sh@38 -- # return 0 00:21:47.214 09:50:10 -- target/multipath.sh@72 -- # subsystem=nvme-subsys0 00:21:47.214 09:50:10 -- target/multipath.sh@73 -- # paths=(/sys/class/nvme-subsystem/$subsystem/nvme*/nvme*c*) 00:21:47.214 09:50:10 -- target/multipath.sh@74 -- # paths=("${paths[@]##*/}") 00:21:47.214 09:50:10 -- target/multipath.sh@76 -- # (( 2 == 2 )) 00:21:47.214 09:50:10 -- target/multipath.sh@78 -- # p0=nvme0c0n1 00:21:47.214 09:50:10 -- target/multipath.sh@79 -- # p1=nvme0c1n1 00:21:47.214 09:50:10 -- target/multipath.sh@81 -- # check_ana_state nvme0c0n1 optimized 00:21:47.214 09:50:10 -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=optimized 00:21:47.214 09:50:10 -- target/multipath.sh@22 -- # local timeout=20 00:21:47.214 09:50:10 -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:21:47.214 09:50:10 -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]] 00:21:47.214 09:50:10 -- target/multipath.sh@25 -- # [[ optimized != \o\p\t\i\m\i\z\e\d ]] 00:21:47.214 09:50:10 -- target/multipath.sh@82 -- # check_ana_state nvme0c1n1 optimized 00:21:47.214 09:50:10 -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=optimized 00:21:47.214 09:50:10 -- target/multipath.sh@22 -- # local timeout=20 00:21:47.214 09:50:10 -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:21:47.214 09:50:10 -- target/multipath.sh@25 -- # [[ ! 
-e /sys/block/nvme0c1n1/ana_state ]] 00:21:47.214 09:50:10 -- target/multipath.sh@25 -- # [[ optimized != \o\p\t\i\m\i\z\e\d ]] 00:21:47.214 09:50:10 -- target/multipath.sh@85 -- # echo numa 00:21:47.214 09:50:10 -- target/multipath.sh@88 -- # fio_pid=70224 00:21:47.214 09:50:10 -- target/multipath.sh@87 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randrw -r 6 -v 00:21:47.214 09:50:10 -- target/multipath.sh@90 -- # sleep 1 00:21:47.214 [global] 00:21:47.214 thread=1 00:21:47.214 invalidate=1 00:21:47.214 rw=randrw 00:21:47.214 time_based=1 00:21:47.214 runtime=6 00:21:47.214 ioengine=libaio 00:21:47.214 direct=1 00:21:47.214 bs=4096 00:21:47.214 iodepth=128 00:21:47.215 norandommap=0 00:21:47.215 numjobs=1 00:21:47.215 00:21:47.215 verify_dump=1 00:21:47.215 verify_backlog=512 00:21:47.215 verify_state_save=0 00:21:47.215 do_verify=1 00:21:47.215 verify=crc32c-intel 00:21:47.215 [job0] 00:21:47.215 filename=/dev/nvme0n1 00:21:47.215 Could not set queue depth (nvme0n1) 00:21:47.490 job0: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:21:47.490 fio-3.35 00:21:47.490 Starting 1 thread 00:21:48.459 09:50:11 -- target/multipath.sh@92 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:21:48.459 09:50:11 -- target/multipath.sh@93 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 00:21:48.807 09:50:12 -- target/multipath.sh@95 -- # check_ana_state nvme0c0n1 inaccessible 00:21:48.807 09:50:12 -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=inaccessible 00:21:48.807 09:50:12 -- target/multipath.sh@22 -- # local timeout=20 00:21:48.807 09:50:12 -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:21:48.807 09:50:12 -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]] 00:21:48.807 09:50:12 -- target/multipath.sh@25 -- # [[ inaccessible != \i\n\a\c\c\e\s\s\i\b\l\e ]] 00:21:48.807 09:50:12 -- target/multipath.sh@96 -- # check_ana_state nvme0c1n1 non-optimized 00:21:48.807 09:50:12 -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=non-optimized 00:21:48.807 09:50:12 -- target/multipath.sh@22 -- # local timeout=20 00:21:48.807 09:50:12 -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:21:48.807 09:50:12 -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 00:21:48.807 09:50:12 -- target/multipath.sh@25 -- # [[ non-optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]] 00:21:48.807 09:50:12 -- target/multipath.sh@98 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:21:49.096 09:50:12 -- target/multipath.sh@99 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n inaccessible 00:21:49.408 09:50:12 -- target/multipath.sh@101 -- # check_ana_state nvme0c0n1 non-optimized 00:21:49.408 09:50:12 -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=non-optimized 00:21:49.408 09:50:12 -- target/multipath.sh@22 -- # local timeout=20 00:21:49.408 09:50:12 -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:21:49.408 09:50:12 -- target/multipath.sh@25 -- # [[ ! 
-e /sys/block/nvme0c0n1/ana_state ]] 00:21:49.408 09:50:12 -- target/multipath.sh@25 -- # [[ non-optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]] 00:21:49.408 09:50:12 -- target/multipath.sh@102 -- # check_ana_state nvme0c1n1 inaccessible 00:21:49.408 09:50:12 -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=inaccessible 00:21:49.408 09:50:12 -- target/multipath.sh@22 -- # local timeout=20 00:21:49.408 09:50:12 -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:21:49.408 09:50:12 -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 00:21:49.408 09:50:12 -- target/multipath.sh@25 -- # [[ inaccessible != \i\n\a\c\c\e\s\s\i\b\l\e ]] 00:21:49.408 09:50:12 -- target/multipath.sh@104 -- # wait 70224 00:21:53.615 00:21:53.615 job0: (groupid=0, jobs=1): err= 0: pid=70251: Thu Apr 18 09:50:16 2024 00:21:53.615 read: IOPS=8271, BW=32.3MiB/s (33.9MB/s)(194MiB/6008msec) 00:21:53.615 slat (usec): min=4, max=9939, avg=73.86, stdev=306.54 00:21:53.615 clat (usec): min=1947, max=23457, avg=10704.27, stdev=1991.34 00:21:53.616 lat (usec): min=1958, max=23619, avg=10778.13, stdev=1996.37 00:21:53.616 clat percentiles (usec): 00:21:53.616 | 1.00th=[ 5211], 5.00th=[ 7963], 10.00th=[ 8979], 20.00th=[ 9634], 00:21:53.616 | 30.00th=[10028], 40.00th=[10290], 50.00th=[10552], 60.00th=[10683], 00:21:53.616 | 70.00th=[11076], 80.00th=[11469], 90.00th=[12387], 95.00th=[15401], 00:21:53.616 | 99.00th=[16909], 99.50th=[17433], 99.90th=[19268], 99.95th=[20055], 00:21:53.616 | 99.99th=[20841] 00:21:53.616 bw ( KiB/s): min= 4544, max=20952, per=50.15%, avg=16594.42, stdev=5767.12, samples=12 00:21:53.616 iops : min= 1136, max= 5238, avg=4148.50, stdev=1441.83, samples=12 00:21:53.616 write: IOPS=4700, BW=18.4MiB/s (19.3MB/s)(97.4MiB/5303msec); 0 zone resets 00:21:53.616 slat (usec): min=6, max=5468, avg=82.42, stdev=220.21 00:21:53.616 clat (usec): min=2859, max=18880, avg=9233.28, stdev=1697.66 00:21:53.616 lat (usec): min=2910, max=18905, avg=9315.70, stdev=1704.17 00:21:53.616 clat percentiles (usec): 00:21:53.616 | 1.00th=[ 4080], 5.00th=[ 5342], 10.00th=[ 7177], 20.00th=[ 8586], 00:21:53.616 | 30.00th=[ 8979], 40.00th=[ 9241], 50.00th=[ 9503], 60.00th=[ 9634], 00:21:53.616 | 70.00th=[ 9896], 80.00th=[10159], 90.00th=[10552], 95.00th=[11076], 00:21:53.616 | 99.00th=[14484], 99.50th=[15270], 99.90th=[17433], 99.95th=[17695], 00:21:53.616 | 99.99th=[18220] 00:21:53.616 bw ( KiB/s): min= 4760, max=20744, per=88.14%, avg=16573.83, stdev=5641.21, samples=12 00:21:53.616 iops : min= 1190, max= 5186, avg=4143.42, stdev=1410.40, samples=12 00:21:53.616 lat (msec) : 2=0.01%, 4=0.36%, 10=43.52%, 20=56.09%, 50=0.03% 00:21:53.616 cpu : usr=4.99%, sys=18.03%, ctx=4491, majf=0, minf=72 00:21:53.616 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.6% 00:21:53.616 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:53.616 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:21:53.616 issued rwts: total=49695,24928,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:53.616 latency : target=0, window=0, percentile=100.00%, depth=128 00:21:53.616 00:21:53.616 Run status group 0 (all jobs): 00:21:53.616 READ: bw=32.3MiB/s (33.9MB/s), 32.3MiB/s-32.3MiB/s (33.9MB/s-33.9MB/s), io=194MiB (204MB), run=6008-6008msec 00:21:53.616 WRITE: bw=18.4MiB/s (19.3MB/s), 18.4MiB/s-18.4MiB/s (19.3MB/s-19.3MB/s), io=97.4MiB (102MB), run=5303-5303msec 00:21:53.616 00:21:53.616 Disk stats (read/write): 00:21:53.616 nvme0n1: ios=48865/24607, 
merge=0/0, ticks=504309/215145, in_queue=719454, util=98.70% 00:21:53.616 09:50:16 -- target/multipath.sh@106 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n optimized 00:21:53.875 09:50:17 -- target/multipath.sh@107 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n optimized 00:21:54.134 09:50:17 -- target/multipath.sh@109 -- # check_ana_state nvme0c0n1 optimized 00:21:54.134 09:50:17 -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=optimized 00:21:54.134 09:50:17 -- target/multipath.sh@22 -- # local timeout=20 00:21:54.134 09:50:17 -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:21:54.134 09:50:17 -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]] 00:21:54.134 09:50:17 -- target/multipath.sh@25 -- # [[ optimized != \o\p\t\i\m\i\z\e\d ]] 00:21:54.134 09:50:17 -- target/multipath.sh@110 -- # check_ana_state nvme0c1n1 optimized 00:21:54.134 09:50:17 -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=optimized 00:21:54.134 09:50:17 -- target/multipath.sh@22 -- # local timeout=20 00:21:54.134 09:50:17 -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:21:54.134 09:50:17 -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 00:21:54.134 09:50:17 -- target/multipath.sh@25 -- # [[ optimized != \o\p\t\i\m\i\z\e\d ]] 00:21:54.134 09:50:17 -- target/multipath.sh@113 -- # echo round-robin 00:21:54.134 09:50:17 -- target/multipath.sh@116 -- # fio_pid=70325 00:21:54.134 09:50:17 -- target/multipath.sh@115 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randrw -r 6 -v 00:21:54.134 09:50:17 -- target/multipath.sh@118 -- # sleep 1 00:21:54.134 [global] 00:21:54.134 thread=1 00:21:54.134 invalidate=1 00:21:54.134 rw=randrw 00:21:54.134 time_based=1 00:21:54.134 runtime=6 00:21:54.134 ioengine=libaio 00:21:54.134 direct=1 00:21:54.134 bs=4096 00:21:54.134 iodepth=128 00:21:54.134 norandommap=0 00:21:54.134 numjobs=1 00:21:54.134 00:21:54.134 verify_dump=1 00:21:54.134 verify_backlog=512 00:21:54.134 verify_state_save=0 00:21:54.134 do_verify=1 00:21:54.134 verify=crc32c-intel 00:21:54.134 [job0] 00:21:54.134 filename=/dev/nvme0n1 00:21:54.134 Could not set queue depth (nvme0n1) 00:21:54.134 job0: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:21:54.134 fio-3.35 00:21:54.134 Starting 1 thread 00:21:55.069 09:50:18 -- target/multipath.sh@120 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:21:55.328 09:50:18 -- target/multipath.sh@121 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 00:21:55.586 09:50:18 -- target/multipath.sh@123 -- # check_ana_state nvme0c0n1 inaccessible 00:21:55.586 09:50:18 -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=inaccessible 00:21:55.586 09:50:18 -- target/multipath.sh@22 -- # local timeout=20 00:21:55.586 09:50:18 -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:21:55.586 09:50:18 -- target/multipath.sh@25 -- # [[ ! 
-e /sys/block/nvme0c0n1/ana_state ]] 00:21:55.586 09:50:18 -- target/multipath.sh@25 -- # [[ inaccessible != \i\n\a\c\c\e\s\s\i\b\l\e ]] 00:21:55.586 09:50:18 -- target/multipath.sh@124 -- # check_ana_state nvme0c1n1 non-optimized 00:21:55.586 09:50:18 -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=non-optimized 00:21:55.586 09:50:18 -- target/multipath.sh@22 -- # local timeout=20 00:21:55.586 09:50:18 -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:21:55.586 09:50:18 -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 00:21:55.586 09:50:18 -- target/multipath.sh@25 -- # [[ non-optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]] 00:21:55.586 09:50:18 -- target/multipath.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:21:56.153 09:50:19 -- target/multipath.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n inaccessible 00:21:56.153 09:50:19 -- target/multipath.sh@129 -- # check_ana_state nvme0c0n1 non-optimized 00:21:56.153 09:50:19 -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=non-optimized 00:21:56.153 09:50:19 -- target/multipath.sh@22 -- # local timeout=20 00:21:56.153 09:50:19 -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:21:56.153 09:50:19 -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]] 00:21:56.153 09:50:19 -- target/multipath.sh@25 -- # [[ non-optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]] 00:21:56.153 09:50:19 -- target/multipath.sh@130 -- # check_ana_state nvme0c1n1 inaccessible 00:21:56.153 09:50:19 -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=inaccessible 00:21:56.153 09:50:19 -- target/multipath.sh@22 -- # local timeout=20 00:21:56.153 09:50:19 -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:21:56.153 09:50:19 -- target/multipath.sh@25 -- # [[ ! 
-e /sys/block/nvme0c1n1/ana_state ]] 00:21:56.153 09:50:19 -- target/multipath.sh@25 -- # [[ inaccessible != \i\n\a\c\c\e\s\s\i\b\l\e ]] 00:21:56.153 09:50:19 -- target/multipath.sh@132 -- # wait 70325 00:22:00.341 00:22:00.341 job0: (groupid=0, jobs=1): err= 0: pid=70346: Thu Apr 18 09:50:23 2024 00:22:00.341 read: IOPS=9231, BW=36.1MiB/s (37.8MB/s)(217MiB/6006msec) 00:22:00.341 slat (usec): min=5, max=13148, avg=57.37, stdev=253.54 00:22:00.341 clat (usec): min=390, max=28932, avg=9752.29, stdev=2726.80 00:22:00.341 lat (usec): min=401, max=28952, avg=9809.66, stdev=2746.00 00:22:00.341 clat percentiles (usec): 00:22:00.341 | 1.00th=[ 2180], 5.00th=[ 4490], 10.00th=[ 5932], 20.00th=[ 7963], 00:22:00.341 | 30.00th=[ 9372], 40.00th=[ 9896], 50.00th=[10290], 60.00th=[10552], 00:22:00.341 | 70.00th=[10814], 80.00th=[11076], 90.00th=[11863], 95.00th=[13566], 00:22:00.341 | 99.00th=[16581], 99.50th=[17433], 99.90th=[27657], 99.95th=[28705], 00:22:00.341 | 99.99th=[28967] 00:22:00.341 bw ( KiB/s): min= 1992, max=29285, per=50.03%, avg=18475.08, stdev=9043.09, samples=12 00:22:00.341 iops : min= 498, max= 7321, avg=4618.75, stdev=2260.75, samples=12 00:22:00.341 write: IOPS=5790, BW=22.6MiB/s (23.7MB/s)(109MiB/4809msec); 0 zone resets 00:22:00.341 slat (usec): min=13, max=3198, avg=63.09, stdev=176.78 00:22:00.341 clat (usec): min=295, max=28673, avg=7968.86, stdev=2577.43 00:22:00.341 lat (usec): min=325, max=28700, avg=8031.96, stdev=2598.24 00:22:00.341 clat percentiles (usec): 00:22:00.341 | 1.00th=[ 2040], 5.00th=[ 3359], 10.00th=[ 4146], 20.00th=[ 5342], 00:22:00.341 | 30.00th=[ 6456], 40.00th=[ 8029], 50.00th=[ 8848], 60.00th=[ 9241], 00:22:00.341 | 70.00th=[ 9634], 80.00th=[10028], 90.00th=[10421], 95.00th=[10945], 00:22:00.341 | 99.00th=[13698], 99.50th=[14746], 99.90th=[16450], 99.95th=[17433], 00:22:00.341 | 99.99th=[27132] 00:22:00.341 bw ( KiB/s): min= 2272, max=30435, per=79.96%, avg=18520.92, stdev=9076.51, samples=12 00:22:00.341 iops : min= 568, max= 7608, avg=4630.33, stdev=2269.09, samples=12 00:22:00.341 lat (usec) : 500=0.02%, 750=0.06%, 1000=0.10% 00:22:00.341 lat (msec) : 2=0.73%, 4=4.51%, 10=48.96%, 20=45.45%, 50=0.17% 00:22:00.341 cpu : usr=4.78%, sys=20.19%, ctx=5059, majf=0, minf=151 00:22:00.341 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.6% 00:22:00.341 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:00.341 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:22:00.341 issued rwts: total=55444,27846,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:00.341 latency : target=0, window=0, percentile=100.00%, depth=128 00:22:00.341 00:22:00.341 Run status group 0 (all jobs): 00:22:00.341 READ: bw=36.1MiB/s (37.8MB/s), 36.1MiB/s-36.1MiB/s (37.8MB/s-37.8MB/s), io=217MiB (227MB), run=6006-6006msec 00:22:00.341 WRITE: bw=22.6MiB/s (23.7MB/s), 22.6MiB/s-22.6MiB/s (23.7MB/s-23.7MB/s), io=109MiB (114MB), run=4809-4809msec 00:22:00.341 00:22:00.341 Disk stats (read/write): 00:22:00.341 nvme0n1: ios=54734/27398, merge=0/0, ticks=512373/205347, in_queue=717720, util=98.70% 00:22:00.341 09:50:23 -- target/multipath.sh@134 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:22:00.341 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s) 00:22:00.341 09:50:23 -- target/multipath.sh@135 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:22:00.341 09:50:23 -- common/autotest_common.sh@1205 -- # local i=0 00:22:00.341 09:50:23 -- common/autotest_common.sh@1206 -- # lsblk -o NAME,SERIAL 00:22:00.341 09:50:23 
-- common/autotest_common.sh@1206 -- # grep -q -w SPDKISFASTANDAWESOME 00:22:00.599 09:50:23 -- common/autotest_common.sh@1213 -- # lsblk -l -o NAME,SERIAL 00:22:00.599 09:50:23 -- common/autotest_common.sh@1213 -- # grep -q -w SPDKISFASTANDAWESOME 00:22:00.599 09:50:23 -- common/autotest_common.sh@1217 -- # return 0 00:22:00.599 09:50:23 -- target/multipath.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:22:00.858 09:50:24 -- target/multipath.sh@139 -- # rm -f ./local-job0-0-verify.state 00:22:00.858 09:50:24 -- target/multipath.sh@140 -- # rm -f ./local-job1-1-verify.state 00:22:00.858 09:50:24 -- target/multipath.sh@142 -- # trap - SIGINT SIGTERM EXIT 00:22:00.858 09:50:24 -- target/multipath.sh@144 -- # nvmftestfini 00:22:00.858 09:50:24 -- nvmf/common.sh@477 -- # nvmfcleanup 00:22:00.858 09:50:24 -- nvmf/common.sh@117 -- # sync 00:22:00.858 09:50:24 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:22:00.858 09:50:24 -- nvmf/common.sh@120 -- # set +e 00:22:00.858 09:50:24 -- nvmf/common.sh@121 -- # for i in {1..20} 00:22:00.858 09:50:24 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:22:00.858 rmmod nvme_tcp 00:22:00.858 rmmod nvme_fabrics 00:22:00.858 rmmod nvme_keyring 00:22:00.858 09:50:24 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:22:00.858 09:50:24 -- nvmf/common.sh@124 -- # set -e 00:22:00.858 09:50:24 -- nvmf/common.sh@125 -- # return 0 00:22:00.858 09:50:24 -- nvmf/common.sh@478 -- # '[' -n 70139 ']' 00:22:00.858 09:50:24 -- nvmf/common.sh@479 -- # killprocess 70139 00:22:00.858 09:50:24 -- common/autotest_common.sh@936 -- # '[' -z 70139 ']' 00:22:00.858 09:50:24 -- common/autotest_common.sh@940 -- # kill -0 70139 00:22:00.858 09:50:24 -- common/autotest_common.sh@941 -- # uname 00:22:00.858 09:50:24 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:22:00.858 09:50:24 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 70139 00:22:00.858 09:50:24 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:22:00.858 09:50:24 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:22:00.858 09:50:24 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 70139' 00:22:00.858 killing process with pid 70139 00:22:00.858 09:50:24 -- common/autotest_common.sh@955 -- # kill 70139 00:22:00.858 09:50:24 -- common/autotest_common.sh@960 -- # wait 70139 00:22:02.234 09:50:25 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:22:02.234 09:50:25 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:22:02.234 09:50:25 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:22:02.234 09:50:25 -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:22:02.234 09:50:25 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:22:02.234 09:50:25 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:02.234 09:50:25 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:22:02.234 09:50:25 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:02.234 09:50:25 -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:22:02.234 00:22:02.234 real 0m20.439s 00:22:02.234 user 1m15.021s 00:22:02.234 sys 0m8.823s 00:22:02.234 09:50:25 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:22:02.234 09:50:25 -- common/autotest_common.sh@10 -- # set +x 00:22:02.234 ************************************ 00:22:02.234 END TEST nvmf_multipath 00:22:02.234 ************************************ 00:22:02.234 09:50:25 -- nvmf/nvmf.sh@53 -- # run_test 
nvmf_zcopy /home/vagrant/spdk_repo/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:22:02.234 09:50:25 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:22:02.234 09:50:25 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:22:02.234 09:50:25 -- common/autotest_common.sh@10 -- # set +x 00:22:02.494 ************************************ 00:22:02.494 START TEST nvmf_zcopy 00:22:02.494 ************************************ 00:22:02.494 09:50:25 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:22:02.494 * Looking for test storage... 00:22:02.494 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:22:02.494 09:50:25 -- target/zcopy.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:22:02.494 09:50:25 -- nvmf/common.sh@7 -- # uname -s 00:22:02.494 09:50:25 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:02.494 09:50:25 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:02.494 09:50:25 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:02.494 09:50:25 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:02.494 09:50:25 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:02.494 09:50:25 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:02.494 09:50:25 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:02.494 09:50:25 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:02.494 09:50:25 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:02.494 09:50:25 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:02.494 09:50:25 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:e0c1fd61-2902-4c83-8893-eeb512aeb67d 00:22:02.494 09:50:25 -- nvmf/common.sh@18 -- # NVME_HOSTID=e0c1fd61-2902-4c83-8893-eeb512aeb67d 00:22:02.494 09:50:25 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:02.494 09:50:25 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:02.494 09:50:25 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:22:02.494 09:50:25 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:02.494 09:50:25 -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:22:02.494 09:50:25 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:02.494 09:50:25 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:02.494 09:50:25 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:02.494 09:50:25 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:02.494 09:50:25 -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:02.494 09:50:25 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:02.494 09:50:25 -- paths/export.sh@5 -- # export PATH 00:22:02.494 09:50:25 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:02.494 09:50:25 -- nvmf/common.sh@47 -- # : 0 00:22:02.494 09:50:25 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:22:02.494 09:50:25 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:22:02.494 09:50:25 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:22:02.494 09:50:25 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:02.494 09:50:25 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:02.494 09:50:25 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:22:02.494 09:50:25 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:22:02.494 09:50:25 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:22:02.494 09:50:25 -- target/zcopy.sh@12 -- # nvmftestinit 00:22:02.494 09:50:25 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:22:02.494 09:50:25 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:02.494 09:50:25 -- nvmf/common.sh@437 -- # prepare_net_devs 00:22:02.494 09:50:25 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:22:02.494 09:50:25 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:22:02.495 09:50:25 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:02.495 09:50:25 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:22:02.495 09:50:25 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:02.495 09:50:25 -- nvmf/common.sh@403 -- # [[ virt != virt ]] 00:22:02.495 09:50:25 -- nvmf/common.sh@405 -- # [[ no == yes ]] 00:22:02.495 09:50:25 -- nvmf/common.sh@412 -- # [[ virt == phy ]] 00:22:02.495 09:50:25 -- nvmf/common.sh@415 -- # [[ virt == phy-fallback ]] 00:22:02.495 09:50:25 -- nvmf/common.sh@420 -- # [[ tcp == tcp ]] 00:22:02.495 09:50:25 -- nvmf/common.sh@421 -- # nvmf_veth_init 00:22:02.495 09:50:25 -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:02.495 09:50:25 -- 
nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:02.495 09:50:25 -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:22:02.495 09:50:25 -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:22:02.495 09:50:25 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:22:02.495 09:50:25 -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:22:02.495 09:50:25 -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:22:02.495 09:50:25 -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:02.495 09:50:25 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:22:02.495 09:50:25 -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:22:02.495 09:50:25 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:22:02.495 09:50:25 -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:22:02.495 09:50:25 -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:22:02.495 09:50:25 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:22:02.495 Cannot find device "nvmf_tgt_br" 00:22:02.495 09:50:25 -- nvmf/common.sh@155 -- # true 00:22:02.495 09:50:25 -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:22:02.495 Cannot find device "nvmf_tgt_br2" 00:22:02.495 09:50:25 -- nvmf/common.sh@156 -- # true 00:22:02.495 09:50:25 -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:22:02.495 09:50:25 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:22:02.495 Cannot find device "nvmf_tgt_br" 00:22:02.495 09:50:25 -- nvmf/common.sh@158 -- # true 00:22:02.495 09:50:25 -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:22:02.495 Cannot find device "nvmf_tgt_br2" 00:22:02.495 09:50:25 -- nvmf/common.sh@159 -- # true 00:22:02.495 09:50:25 -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:22:02.754 09:50:25 -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:22:02.754 09:50:25 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:22:02.754 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:22:02.754 09:50:25 -- nvmf/common.sh@162 -- # true 00:22:02.754 09:50:25 -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:22:02.754 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:22:02.754 09:50:25 -- nvmf/common.sh@163 -- # true 00:22:02.754 09:50:25 -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:22:02.754 09:50:25 -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:22:02.754 09:50:25 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:22:02.754 09:50:25 -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:22:02.754 09:50:25 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:22:02.754 09:50:25 -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:22:02.754 09:50:25 -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:22:02.754 09:50:25 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:22:02.754 09:50:25 -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:22:02.754 09:50:25 -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:22:02.754 09:50:25 -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:22:02.754 09:50:25 -- 
nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:22:02.754 09:50:25 -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:22:02.754 09:50:25 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:22:02.754 09:50:26 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:22:02.754 09:50:26 -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:22:02.754 09:50:26 -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:22:02.754 09:50:26 -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:22:02.754 09:50:26 -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:22:02.754 09:50:26 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:22:02.754 09:50:26 -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:22:02.754 09:50:26 -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:22:02.754 09:50:26 -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:22:02.754 09:50:26 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:22:02.754 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:22:02.754 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.095 ms 00:22:02.754 00:22:02.754 --- 10.0.0.2 ping statistics --- 00:22:02.754 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:02.754 rtt min/avg/max/mdev = 0.095/0.095/0.095/0.000 ms 00:22:02.754 09:50:26 -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:22:02.754 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:22:02.754 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.043 ms 00:22:02.754 00:22:02.754 --- 10.0.0.3 ping statistics --- 00:22:02.754 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:02.754 rtt min/avg/max/mdev = 0.043/0.043/0.043/0.000 ms 00:22:02.754 09:50:26 -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:22:02.754 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:22:02.754 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.039 ms 00:22:02.754 00:22:02.754 --- 10.0.0.1 ping statistics --- 00:22:02.754 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:02.754 rtt min/avg/max/mdev = 0.039/0.039/0.039/0.000 ms 00:22:02.754 09:50:26 -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:02.754 09:50:26 -- nvmf/common.sh@422 -- # return 0 00:22:02.754 09:50:26 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:22:02.754 09:50:26 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:02.754 09:50:26 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:22:02.754 09:50:26 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:22:02.754 09:50:26 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:02.754 09:50:26 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:22:02.754 09:50:26 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:22:02.754 09:50:26 -- target/zcopy.sh@13 -- # nvmfappstart -m 0x2 00:22:02.754 09:50:26 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:22:02.754 09:50:26 -- common/autotest_common.sh@710 -- # xtrace_disable 00:22:02.754 09:50:26 -- common/autotest_common.sh@10 -- # set +x 00:22:02.754 09:50:26 -- nvmf/common.sh@470 -- # nvmfpid=70618 00:22:02.754 09:50:26 -- nvmf/common.sh@471 -- # waitforlisten 70618 00:22:02.754 09:50:26 -- nvmf/common.sh@469 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:22:03.013 09:50:26 -- common/autotest_common.sh@817 -- # '[' -z 70618 ']' 00:22:03.013 09:50:26 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:03.013 09:50:26 -- common/autotest_common.sh@822 -- # local max_retries=100 00:22:03.013 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:03.013 09:50:26 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:03.013 09:50:26 -- common/autotest_common.sh@826 -- # xtrace_disable 00:22:03.013 09:50:26 -- common/autotest_common.sh@10 -- # set +x 00:22:03.013 [2024-04-18 09:50:26.256343] Starting SPDK v24.05-pre git sha1 65b4e17c6 / DPDK 23.11.0 initialization... 00:22:03.013 [2024-04-18 09:50:26.256505] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:03.272 [2024-04-18 09:50:26.433747] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:03.530 [2024-04-18 09:50:26.713255] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:03.531 [2024-04-18 09:50:26.713334] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:03.531 [2024-04-18 09:50:26.713357] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:03.531 [2024-04-18 09:50:26.713385] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:03.531 [2024-04-18 09:50:26.713402] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
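The zcopy target configuration that the rpc_cmd trace below performs reduces to a handful of RPCs. This sketch invokes scripts/rpc.py directly, which is an assumption (the harness goes through its rpc_cmd wrapper); the arguments themselves are taken from the trace:

  # Sketch of the zcopy target setup traced below; direct rpc.py invocation is an assumption.
  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  $rpc nvmf_create_transport -t tcp -o -c 0 --zcopy                      # TCP transport with zero-copy enabled
  $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  $rpc nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
  $rpc bdev_malloc_create 32 4096 -b malloc0                             # 32 MiB malloc bdev, 4096-byte blocks
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
  # I/O is then driven from the initiator side by bdevperf over NVMe/TCP:
  #   build/examples/bdevperf --json /dev/fd/62 -t 10 -q 128 -w verify -o 8192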
00:22:03.531 [2024-04-18 09:50:26.713448] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:22:03.790 09:50:27 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:22:03.790 09:50:27 -- common/autotest_common.sh@850 -- # return 0 00:22:03.790 09:50:27 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:22:03.790 09:50:27 -- common/autotest_common.sh@716 -- # xtrace_disable 00:22:03.790 09:50:27 -- common/autotest_common.sh@10 -- # set +x 00:22:03.790 09:50:27 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:03.790 09:50:27 -- target/zcopy.sh@15 -- # '[' tcp '!=' tcp ']' 00:22:03.790 09:50:27 -- target/zcopy.sh@22 -- # rpc_cmd nvmf_create_transport -t tcp -o -c 0 --zcopy 00:22:03.790 09:50:27 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:03.790 09:50:27 -- common/autotest_common.sh@10 -- # set +x 00:22:03.790 [2024-04-18 09:50:27.173051] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:03.790 09:50:27 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:03.790 09:50:27 -- target/zcopy.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:22:03.790 09:50:27 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:03.790 09:50:27 -- common/autotest_common.sh@10 -- # set +x 00:22:04.048 09:50:27 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:04.048 09:50:27 -- target/zcopy.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:22:04.048 09:50:27 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:04.048 09:50:27 -- common/autotest_common.sh@10 -- # set +x 00:22:04.048 [2024-04-18 09:50:27.189215] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:04.048 09:50:27 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:04.048 09:50:27 -- target/zcopy.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:22:04.048 09:50:27 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:04.048 09:50:27 -- common/autotest_common.sh@10 -- # set +x 00:22:04.048 09:50:27 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:04.048 09:50:27 -- target/zcopy.sh@29 -- # rpc_cmd bdev_malloc_create 32 4096 -b malloc0 00:22:04.048 09:50:27 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:04.048 09:50:27 -- common/autotest_common.sh@10 -- # set +x 00:22:04.048 malloc0 00:22:04.048 09:50:27 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:04.048 09:50:27 -- target/zcopy.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:22:04.048 09:50:27 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:04.048 09:50:27 -- common/autotest_common.sh@10 -- # set +x 00:22:04.048 09:50:27 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:04.048 09:50:27 -- target/zcopy.sh@33 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -t 10 -q 128 -w verify -o 8192 00:22:04.048 09:50:27 -- target/zcopy.sh@33 -- # gen_nvmf_target_json 00:22:04.048 09:50:27 -- nvmf/common.sh@521 -- # config=() 00:22:04.048 09:50:27 -- nvmf/common.sh@521 -- # local subsystem config 00:22:04.048 09:50:27 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:22:04.048 09:50:27 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:22:04.048 { 00:22:04.048 "params": { 00:22:04.048 "name": "Nvme$subsystem", 00:22:04.048 "trtype": "$TEST_TRANSPORT", 
00:22:04.048 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:04.048 "adrfam": "ipv4", 00:22:04.048 "trsvcid": "$NVMF_PORT", 00:22:04.048 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:04.048 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:04.048 "hdgst": ${hdgst:-false}, 00:22:04.048 "ddgst": ${ddgst:-false} 00:22:04.048 }, 00:22:04.048 "method": "bdev_nvme_attach_controller" 00:22:04.048 } 00:22:04.048 EOF 00:22:04.048 )") 00:22:04.048 09:50:27 -- nvmf/common.sh@543 -- # cat 00:22:04.048 09:50:27 -- nvmf/common.sh@545 -- # jq . 00:22:04.048 09:50:27 -- nvmf/common.sh@546 -- # IFS=, 00:22:04.048 09:50:27 -- nvmf/common.sh@547 -- # printf '%s\n' '{ 00:22:04.048 "params": { 00:22:04.048 "name": "Nvme1", 00:22:04.048 "trtype": "tcp", 00:22:04.048 "traddr": "10.0.0.2", 00:22:04.048 "adrfam": "ipv4", 00:22:04.048 "trsvcid": "4420", 00:22:04.048 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:22:04.048 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:22:04.048 "hdgst": false, 00:22:04.048 "ddgst": false 00:22:04.048 }, 00:22:04.048 "method": "bdev_nvme_attach_controller" 00:22:04.048 }' 00:22:04.048 [2024-04-18 09:50:27.356209] Starting SPDK v24.05-pre git sha1 65b4e17c6 / DPDK 23.11.0 initialization... 00:22:04.048 [2024-04-18 09:50:27.356370] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70648 ] 00:22:04.307 [2024-04-18 09:50:27.533690] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:04.573 [2024-04-18 09:50:27.804753] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:22:04.851 Running I/O for 10 seconds... 00:22:17.068 00:22:17.068 Latency(us) 00:22:17.068 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:17.068 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 8192) 00:22:17.068 Verification LBA range: start 0x0 length 0x1000 00:22:17.068 Nvme1n1 : 10.02 4299.34 33.59 0.00 0.00 29690.47 4259.84 39321.60 00:22:17.068 =================================================================================================================== 00:22:17.068 Total : 4299.34 33.59 0.00 0.00 29690.47 4259.84 39321.60 00:22:17.068 09:50:39 -- target/zcopy.sh@39 -- # perfpid=70781 00:22:17.068 09:50:39 -- target/zcopy.sh@41 -- # xtrace_disable 00:22:17.068 09:50:39 -- common/autotest_common.sh@10 -- # set +x 00:22:17.068 09:50:39 -- target/zcopy.sh@37 -- # gen_nvmf_target_json 00:22:17.068 09:50:39 -- target/zcopy.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/63 -t 5 -q 128 -w randrw -M 50 -o 8192 00:22:17.068 09:50:39 -- nvmf/common.sh@521 -- # config=() 00:22:17.068 09:50:39 -- nvmf/common.sh@521 -- # local subsystem config 00:22:17.068 09:50:39 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:22:17.068 09:50:39 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:22:17.068 { 00:22:17.068 "params": { 00:22:17.068 "name": "Nvme$subsystem", 00:22:17.068 "trtype": "$TEST_TRANSPORT", 00:22:17.068 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:17.068 "adrfam": "ipv4", 00:22:17.068 "trsvcid": "$NVMF_PORT", 00:22:17.068 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:17.068 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:17.068 "hdgst": ${hdgst:-false}, 00:22:17.068 "ddgst": ${ddgst:-false} 00:22:17.068 }, 00:22:17.068 "method": "bdev_nvme_attach_controller" 00:22:17.068 } 00:22:17.068 EOF 00:22:17.068 
)") 00:22:17.068 [2024-04-18 09:50:39.171972] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:17.068 [2024-04-18 09:50:39.172027] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:17.068 09:50:39 -- nvmf/common.sh@543 -- # cat 00:22:17.068 09:50:39 -- nvmf/common.sh@545 -- # jq . 00:22:17.068 09:50:39 -- nvmf/common.sh@546 -- # IFS=, 00:22:17.068 09:50:39 -- nvmf/common.sh@547 -- # printf '%s\n' '{ 00:22:17.068 "params": { 00:22:17.068 "name": "Nvme1", 00:22:17.068 "trtype": "tcp", 00:22:17.068 "traddr": "10.0.0.2", 00:22:17.068 "adrfam": "ipv4", 00:22:17.068 "trsvcid": "4420", 00:22:17.068 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:22:17.068 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:22:17.068 "hdgst": false, 00:22:17.068 "ddgst": false 00:22:17.068 }, 00:22:17.068 "method": "bdev_nvme_attach_controller" 00:22:17.068 }' 00:22:17.068 [2024-04-18 09:50:39.183880] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:17.068 [2024-04-18 09:50:39.183923] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:17.068 [2024-04-18 09:50:39.195887] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:17.068 [2024-04-18 09:50:39.195927] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:17.068 [2024-04-18 09:50:39.207875] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:17.068 [2024-04-18 09:50:39.207915] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:17.068 [2024-04-18 09:50:39.219907] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:17.068 [2024-04-18 09:50:39.219947] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:17.068 [2024-04-18 09:50:39.231934] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:17.068 [2024-04-18 09:50:39.231981] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:17.068 [2024-04-18 09:50:39.243897] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:17.068 [2024-04-18 09:50:39.243937] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:17.068 [2024-04-18 09:50:39.255925] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:17.068 [2024-04-18 09:50:39.255966] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:17.068 [2024-04-18 09:50:39.267917] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:17.068 [2024-04-18 09:50:39.267954] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:17.068 [2024-04-18 09:50:39.279899] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:17.068 [2024-04-18 09:50:39.279941] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:17.068 [2024-04-18 09:50:39.290493] Starting SPDK v24.05-pre git sha1 65b4e17c6 / DPDK 23.11.0 initialization... 
00:22:17.068 [2024-04-18 09:50:39.290635] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70781 ] 00:22:17.068 [2024-04-18 09:50:39.292005] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:17.068 [2024-04-18 09:50:39.292055] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:17.068 [2024-04-18 09:50:39.303924] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:17.068 [2024-04-18 09:50:39.303971] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:17.068 [2024-04-18 09:50:39.315945] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:17.068 [2024-04-18 09:50:39.315985] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:17.068 [2024-04-18 09:50:39.327977] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:17.068 [2024-04-18 09:50:39.328024] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:17.068 [2024-04-18 09:50:39.339948] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:17.068 [2024-04-18 09:50:39.339990] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:17.068 [2024-04-18 09:50:39.351952] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:17.068 [2024-04-18 09:50:39.351996] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:17.068 [2024-04-18 09:50:39.363961] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:17.068 [2024-04-18 09:50:39.363999] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:17.068 [2024-04-18 09:50:39.375964] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:17.068 [2024-04-18 09:50:39.376004] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:17.068 [2024-04-18 09:50:39.387962] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:17.068 [2024-04-18 09:50:39.387998] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:17.068 [2024-04-18 09:50:39.399958] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:17.068 [2024-04-18 09:50:39.400002] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:17.068 [2024-04-18 09:50:39.411975] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:17.068 [2024-04-18 09:50:39.412012] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:17.068 [2024-04-18 09:50:39.423969] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:17.068 [2024-04-18 09:50:39.424012] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:17.068 [2024-04-18 09:50:39.435963] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:17.068 [2024-04-18 09:50:39.436000] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:17.068 [2024-04-18 09:50:39.448130] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: 
Requested NSID 1 already in use 00:22:17.068 [2024-04-18 09:50:39.448224] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:17.068 [2024-04-18 09:50:39.460020] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:17.068 [2024-04-18 09:50:39.460068] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:17.068 [2024-04-18 09:50:39.464186] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:17.068 [2024-04-18 09:50:39.472008] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:17.068 [2024-04-18 09:50:39.472058] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:17.068 [2024-04-18 09:50:39.484172] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:17.068 [2024-04-18 09:50:39.484244] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:17.068 [2024-04-18 09:50:39.496016] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:17.068 [2024-04-18 09:50:39.496063] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:17.068 [2024-04-18 09:50:39.508031] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:17.068 [2024-04-18 09:50:39.508070] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:17.068 [2024-04-18 09:50:39.520043] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:17.068 [2024-04-18 09:50:39.520084] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:17.068 [2024-04-18 09:50:39.532026] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:17.068 [2024-04-18 09:50:39.532063] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:17.068 [2024-04-18 09:50:39.544115] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:17.068 [2024-04-18 09:50:39.544188] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:17.068 [2024-04-18 09:50:39.556165] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:17.068 [2024-04-18 09:50:39.556219] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:17.068 [2024-04-18 09:50:39.568065] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:17.068 [2024-04-18 09:50:39.568112] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:17.068 [2024-04-18 09:50:39.580110] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:17.068 [2024-04-18 09:50:39.580173] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:17.068 [2024-04-18 09:50:39.592101] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:17.068 [2024-04-18 09:50:39.592162] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:17.068 [2024-04-18 09:50:39.604166] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:17.068 [2024-04-18 09:50:39.604226] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:17.068 [2024-04-18 09:50:39.616169] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:17.068 
[2024-04-18 09:50:39.616241] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:17.068 [2024-04-18 09:50:39.628061] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:17.068 [2024-04-18 09:50:39.628102] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:17.068 [2024-04-18 09:50:39.640089] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:17.068 [2024-04-18 09:50:39.640148] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:17.068 [2024-04-18 09:50:39.652090] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:17.068 [2024-04-18 09:50:39.652143] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:17.068 [2024-04-18 09:50:39.664103] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:17.068 [2024-04-18 09:50:39.664160] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:17.068 [2024-04-18 09:50:39.676102] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:17.068 [2024-04-18 09:50:39.676156] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:17.068 [2024-04-18 09:50:39.688083] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:17.068 [2024-04-18 09:50:39.688140] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:17.068 [2024-04-18 09:50:39.700113] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:17.068 [2024-04-18 09:50:39.700167] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:17.068 [2024-04-18 09:50:39.707705] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:22:17.068 [2024-04-18 09:50:39.712118] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:17.068 [2024-04-18 09:50:39.712177] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:17.068 [2024-04-18 09:50:39.724178] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:17.068 [2024-04-18 09:50:39.724234] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:17.068 [2024-04-18 09:50:39.736206] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:17.068 [2024-04-18 09:50:39.736268] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:17.068 [2024-04-18 09:50:39.748120] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:17.068 [2024-04-18 09:50:39.748173] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:17.068 [2024-04-18 09:50:39.760103] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:17.068 [2024-04-18 09:50:39.760160] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:17.068 [2024-04-18 09:50:39.772154] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:17.068 [2024-04-18 09:50:39.772194] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:17.068 [2024-04-18 09:50:39.784119] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:17.068 [2024-04-18 09:50:39.784179] 
nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:17.068 [2024-04-18 09:50:39.796236] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:17.068 [2024-04-18 09:50:39.796295] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:17.068 [2024-04-18 09:50:39.808278] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:17.068 [2024-04-18 09:50:39.808350] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:17.068 [2024-04-18 09:50:39.820238] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:17.068 [2024-04-18 09:50:39.820284] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:17.068 [2024-04-18 09:50:39.832182] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:17.068 [2024-04-18 09:50:39.832227] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:17.068 [2024-04-18 09:50:39.844206] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:17.068 [2024-04-18 09:50:39.844249] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:17.068 [2024-04-18 09:50:39.856192] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:17.068 [2024-04-18 09:50:39.856242] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:17.068 [2024-04-18 09:50:39.868261] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:17.068 [2024-04-18 09:50:39.868327] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:17.068 [2024-04-18 09:50:39.880232] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:17.068 [2024-04-18 09:50:39.880291] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:17.068 [2024-04-18 09:50:39.892211] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:17.068 [2024-04-18 09:50:39.892252] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:17.068 [2024-04-18 09:50:39.904212] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:17.068 [2024-04-18 09:50:39.904256] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:17.068 [2024-04-18 09:50:39.916193] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:17.068 [2024-04-18 09:50:39.916234] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:17.068 [2024-04-18 09:50:39.928256] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:17.068 [2024-04-18 09:50:39.928324] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:17.068 [2024-04-18 09:50:39.940274] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:17.068 [2024-04-18 09:50:39.940327] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:17.069 [2024-04-18 09:50:39.952219] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:17.069 [2024-04-18 09:50:39.952265] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:17.069 [2024-04-18 09:50:39.964214] 
subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:17.069 [2024-04-18 09:50:39.964255] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:17.069 [2024-04-18 09:50:39.976240] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:17.069 [2024-04-18 09:50:39.976293] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:17.069 [2024-04-18 09:50:39.988293] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:17.069 [2024-04-18 09:50:39.988347] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:17.069 [2024-04-18 09:50:40.000304] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:17.069 [2024-04-18 09:50:40.000367] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:17.069 [2024-04-18 09:50:40.012219] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:17.069 [2024-04-18 09:50:40.012260] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:17.069 [2024-04-18 09:50:40.020232] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:17.069 [2024-04-18 09:50:40.020277] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:17.069 [2024-04-18 09:50:40.032246] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:17.069 [2024-04-18 09:50:40.032287] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:17.069 [2024-04-18 09:50:40.044245] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:17.069 [2024-04-18 09:50:40.044287] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:17.069 [2024-04-18 09:50:40.056272] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:17.069 [2024-04-18 09:50:40.056320] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:17.069 [2024-04-18 09:50:40.068265] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:17.069 [2024-04-18 09:50:40.068312] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:17.069 [2024-04-18 09:50:40.080330] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:17.069 [2024-04-18 09:50:40.080381] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:17.069 [2024-04-18 09:50:40.092305] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:17.069 [2024-04-18 09:50:40.092348] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:17.069 [2024-04-18 09:50:40.104301] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:17.069 [2024-04-18 09:50:40.104344] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:17.069 Running I/O for 5 seconds... 
00:22:17.069 [2024-04-18 09:50:40.116369] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:17.069 [2024-04-18 09:50:40.116426] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:17.069 [2024-04-18 09:50:40.133040] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:17.069 [2024-04-18 09:50:40.133089] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:17.069 [2024-04-18 09:50:40.147736] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:17.069 [2024-04-18 09:50:40.147786] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:17.069 [2024-04-18 09:50:40.164141] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:17.069 [2024-04-18 09:50:40.164205] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:17.069 [2024-04-18 09:50:40.181916] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:17.069 [2024-04-18 09:50:40.181958] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:17.069 [2024-04-18 09:50:40.196168] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:17.069 [2024-04-18 09:50:40.196218] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:17.069 [2024-04-18 09:50:40.211967] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:17.069 [2024-04-18 09:50:40.212012] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:17.069 [2024-04-18 09:50:40.230102] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:17.069 [2024-04-18 09:50:40.230172] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:17.069 [2024-04-18 09:50:40.244439] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:17.069 [2024-04-18 09:50:40.244499] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:17.069 [2024-04-18 09:50:40.258576] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:17.069 [2024-04-18 09:50:40.258626] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:17.069 [2024-04-18 09:50:40.275965] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:17.069 [2024-04-18 09:50:40.276011] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:17.069 [2024-04-18 09:50:40.290520] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:17.069 [2024-04-18 09:50:40.290577] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:17.069 [2024-04-18 09:50:40.306347] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:17.069 [2024-04-18 09:50:40.306397] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:17.069 [2024-04-18 09:50:40.322361] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:17.069 [2024-04-18 09:50:40.322411] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:17.069 [2024-04-18 09:50:40.338055] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:17.069 
[2024-04-18 09:50:40.338100] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:17.069 [2024-04-18 09:50:40.352189] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:17.069 [2024-04-18 09:50:40.352236] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:17.069 [2024-04-18 09:50:40.371011] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:17.069 [2024-04-18 09:50:40.371096] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:17.069 [2024-04-18 09:50:40.385466] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:17.069 [2024-04-18 09:50:40.385519] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:17.069 [2024-04-18 09:50:40.401466] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:17.069 [2024-04-18 09:50:40.401507] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:17.069 [2024-04-18 09:50:40.421108] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:17.069 [2024-04-18 09:50:40.421179] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:17.069 [2024-04-18 09:50:40.435305] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:17.069 [2024-04-18 09:50:40.435348] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:17.069 [2024-04-18 09:50:40.451977] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:17.069 [2024-04-18 09:50:40.452029] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:17.368 [2024-04-18 09:50:40.467837] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:17.368 [2024-04-18 09:50:40.467881] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:17.368 [2024-04-18 09:50:40.486306] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:17.368 [2024-04-18 09:50:40.486355] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:17.368 [2024-04-18 09:50:40.499658] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:17.368 [2024-04-18 09:50:40.499718] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:17.368 [2024-04-18 09:50:40.519955] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:17.368 [2024-04-18 09:50:40.520024] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:17.368 [2024-04-18 09:50:40.538895] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:17.368 [2024-04-18 09:50:40.538951] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:17.368 [2024-04-18 09:50:40.553690] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:17.368 [2024-04-18 09:50:40.553750] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:17.368 [2024-04-18 09:50:40.567825] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:17.368 [2024-04-18 09:50:40.567868] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:17.368 [2024-04-18 09:50:40.582727] 
subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:17.368 [2024-04-18 09:50:40.582808] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:17.369 [2024-04-18 09:50:40.600355] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:17.369 [2024-04-18 09:50:40.600405] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:17.369 [2024-04-18 09:50:40.617113] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:17.369 [2024-04-18 09:50:40.617187] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:17.369 [2024-04-18 09:50:40.632909] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:17.369 [2024-04-18 09:50:40.632964] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:17.369 [2024-04-18 09:50:40.650514] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:17.369 [2024-04-18 09:50:40.650580] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:17.369 [2024-04-18 09:50:40.664270] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:17.369 [2024-04-18 09:50:40.664315] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:17.369 [2024-04-18 09:50:40.680407] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:17.369 [2024-04-18 09:50:40.680476] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:17.369 [2024-04-18 09:50:40.696362] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:17.369 [2024-04-18 09:50:40.696409] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:17.369 [2024-04-18 09:50:40.709337] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:17.369 [2024-04-18 09:50:40.709388] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:17.369 [2024-04-18 09:50:40.725513] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:17.369 [2024-04-18 09:50:40.725560] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:17.369 [2024-04-18 09:50:40.739102] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:17.369 [2024-04-18 09:50:40.739197] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:17.369 [2024-04-18 09:50:40.755040] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:17.369 [2024-04-18 09:50:40.755083] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:17.632 [2024-04-18 09:50:40.770885] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:17.632 [2024-04-18 09:50:40.770946] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:17.632 [2024-04-18 09:50:40.787045] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:17.632 [2024-04-18 09:50:40.787103] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:17.632 [2024-04-18 09:50:40.800752] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:17.632 [2024-04-18 09:50:40.800813] 
nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:17.632 [2024-04-18 09:50:40.816388] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:17.632 [2024-04-18 09:50:40.816459] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:17.632 [2024-04-18 09:50:40.832314] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:17.632 [2024-04-18 09:50:40.832375] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:17.632 [2024-04-18 09:50:40.848965] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:17.632 [2024-04-18 09:50:40.849041] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:17.632 [2024-04-18 09:50:40.867410] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:17.632 [2024-04-18 09:50:40.867465] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:17.632 [2024-04-18 09:50:40.881536] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:17.632 [2024-04-18 09:50:40.881585] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:17.632 [2024-04-18 09:50:40.899194] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:17.632 [2024-04-18 09:50:40.899245] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:17.632 [2024-04-18 09:50:40.916021] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:17.632 [2024-04-18 09:50:40.916064] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:17.632 [2024-04-18 09:50:40.934743] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:17.632 [2024-04-18 09:50:40.934792] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:17.632 [2024-04-18 09:50:40.948983] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:17.632 [2024-04-18 09:50:40.949028] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:17.632 [2024-04-18 09:50:40.965103] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:17.632 [2024-04-18 09:50:40.965167] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:17.632 [2024-04-18 09:50:40.981087] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:17.632 [2024-04-18 09:50:40.981159] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:17.632 [2024-04-18 09:50:40.999965] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:17.632 [2024-04-18 09:50:41.000015] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:17.632 [2024-04-18 09:50:41.014245] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:17.632 [2024-04-18 09:50:41.014289] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:17.891 [2024-04-18 09:50:41.030228] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:17.891 [2024-04-18 09:50:41.030280] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:17.891 [2024-04-18 09:50:41.046109] 
subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:17.891 [2024-04-18 09:50:41.046166] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:17.891 [2024-04-18 09:50:41.061567] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:17.891 [2024-04-18 09:50:41.061628] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:17.891 [2024-04-18 09:50:41.077389] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:17.891 [2024-04-18 09:50:41.077447] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:17.891 [2024-04-18 09:50:41.095490] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:17.891 [2024-04-18 09:50:41.095553] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:17.891 [2024-04-18 09:50:41.109668] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:17.891 [2024-04-18 09:50:41.109711] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:17.891 [2024-04-18 09:50:41.125484] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:17.891 [2024-04-18 09:50:41.125533] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:17.891 [2024-04-18 09:50:41.141462] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:17.891 [2024-04-18 09:50:41.141505] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:17.891 [2024-04-18 09:50:41.157654] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:17.891 [2024-04-18 09:50:41.157712] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:17.891 [2024-04-18 09:50:41.175971] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:17.891 [2024-04-18 09:50:41.176022] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:17.891 [2024-04-18 09:50:41.189584] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:17.891 [2024-04-18 09:50:41.189664] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:17.891 [2024-04-18 09:50:41.209547] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:17.891 [2024-04-18 09:50:41.209628] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:17.891 [2024-04-18 09:50:41.225166] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:17.891 [2024-04-18 09:50:41.225223] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:17.891 [2024-04-18 09:50:41.237959] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:17.891 [2024-04-18 09:50:41.238005] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:17.891 [2024-04-18 09:50:41.254079] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:17.891 [2024-04-18 09:50:41.254146] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:17.891 [2024-04-18 09:50:41.271254] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:17.891 [2024-04-18 09:50:41.271290] 
nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:18.150 [2024-04-18 09:50:41.288218] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:18.150 [2024-04-18 09:50:41.288315] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:18.150 [2024-04-18 09:50:41.305582] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:18.150 [2024-04-18 09:50:41.305644] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:18.150 [2024-04-18 09:50:41.321861] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:18.150 [2024-04-18 09:50:41.321922] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:18.150 [2024-04-18 09:50:41.338555] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:18.150 [2024-04-18 09:50:41.338598] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:18.150 [2024-04-18 09:50:41.353927] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:18.150 [2024-04-18 09:50:41.353990] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:18.150 [2024-04-18 09:50:41.369730] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:18.150 [2024-04-18 09:50:41.369800] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:18.150 [2024-04-18 09:50:41.387425] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:18.150 [2024-04-18 09:50:41.387488] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:18.150 [2024-04-18 09:50:41.401093] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:18.150 [2024-04-18 09:50:41.401152] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:18.150 [2024-04-18 09:50:41.417334] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:18.150 [2024-04-18 09:50:41.417379] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:18.150 [2024-04-18 09:50:41.434714] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:18.150 [2024-04-18 09:50:41.434757] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:18.150 [2024-04-18 09:50:41.448214] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:18.150 [2024-04-18 09:50:41.448257] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:18.150 [2024-04-18 09:50:41.467659] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:18.150 [2024-04-18 09:50:41.467707] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:18.150 [2024-04-18 09:50:41.484938] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:18.150 [2024-04-18 09:50:41.484983] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:18.150 [2024-04-18 09:50:41.497904] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:18.150 [2024-04-18 09:50:41.497949] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:18.150 [2024-04-18 09:50:41.518092] 
subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:18.150 [2024-04-18 09:50:41.518168] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:18.150 [2024-04-18 09:50:41.535083] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:18.150 [2024-04-18 09:50:41.535156] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:18.408 [2024-04-18 09:50:41.553617] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:18.408 [2024-04-18 09:50:41.553670] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:18.408 [2024-04-18 09:50:41.567635] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:18.408 [2024-04-18 09:50:41.567681] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:18.408 [2024-04-18 09:50:41.584523] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:18.408 [2024-04-18 09:50:41.584603] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:18.408 [2024-04-18 09:50:41.601084] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:18.408 [2024-04-18 09:50:41.601197] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:18.408 [2024-04-18 09:50:41.617708] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:18.408 [2024-04-18 09:50:41.617764] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:18.408 [2024-04-18 09:50:41.634777] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:18.408 [2024-04-18 09:50:41.634828] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:18.408 [2024-04-18 09:50:41.652325] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:18.408 [2024-04-18 09:50:41.652377] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:18.408 [2024-04-18 09:50:41.665982] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:18.408 [2024-04-18 09:50:41.666024] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:18.408 [2024-04-18 09:50:41.685321] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:18.408 [2024-04-18 09:50:41.685378] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:18.408 [2024-04-18 09:50:41.700841] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:18.408 [2024-04-18 09:50:41.700898] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:18.408 [2024-04-18 09:50:41.719118] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:18.408 [2024-04-18 09:50:41.719220] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:18.408 [2024-04-18 09:50:41.737147] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:18.408 [2024-04-18 09:50:41.737228] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:18.408 [2024-04-18 09:50:41.750002] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:18.408 [2024-04-18 09:50:41.750063] 
nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:18.408 [2024-04-18 09:50:41.768068] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:18.408 [2024-04-18 09:50:41.768122] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:18.408 [2024-04-18 09:50:41.784584] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:18.408 [2024-04-18 09:50:41.784638] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:18.408 [2024-04-18 09:50:41.797755] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:18.408 [2024-04-18 09:50:41.797828] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:18.667 [2024-04-18 09:50:41.817309] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:18.667 [2024-04-18 09:50:41.817365] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:18.667 [2024-04-18 09:50:41.832005] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:18.667 [2024-04-18 09:50:41.832065] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:18.667 [2024-04-18 09:50:41.847759] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:18.667 [2024-04-18 09:50:41.847832] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:18.667 [2024-04-18 09:50:41.866890] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:18.667 [2024-04-18 09:50:41.866952] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:18.667 [2024-04-18 09:50:41.880638] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:18.667 [2024-04-18 09:50:41.880692] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:18.667 [2024-04-18 09:50:41.899609] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:18.667 [2024-04-18 09:50:41.899700] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:18.667 [2024-04-18 09:50:41.914666] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:18.667 [2024-04-18 09:50:41.914725] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:18.667 [2024-04-18 09:50:41.930693] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:18.667 [2024-04-18 09:50:41.930749] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:18.667 [2024-04-18 09:50:41.949352] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:18.667 [2024-04-18 09:50:41.949412] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:18.667 [2024-04-18 09:50:41.963036] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:18.667 [2024-04-18 09:50:41.963117] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:18.667 [2024-04-18 09:50:41.979558] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:18.668 [2024-04-18 09:50:41.979645] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:18.668 [2024-04-18 09:50:41.995531] 
subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:18.668 [2024-04-18 09:50:41.995600] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:18.668 [2024-04-18 09:50:42.014336] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:18.668 [2024-04-18 09:50:42.014396] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:18.668 [2024-04-18 09:50:42.031793] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:18.668 [2024-04-18 09:50:42.031849] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:18.668 [2024-04-18 09:50:42.044855] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:18.668 [2024-04-18 09:50:42.044910] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:18.927 [2024-04-18 09:50:42.064347] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:18.927 [2024-04-18 09:50:42.064399] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:18.927 [2024-04-18 09:50:42.079477] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:18.927 [2024-04-18 09:50:42.079554] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:18.927 [2024-04-18 09:50:42.097674] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:18.927 [2024-04-18 09:50:42.097738] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:18.927 [2024-04-18 09:50:42.111601] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:18.927 [2024-04-18 09:50:42.111657] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:18.927 [2024-04-18 09:50:42.130225] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:18.927 [2024-04-18 09:50:42.130296] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:18.927 [2024-04-18 09:50:42.146962] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:18.927 [2024-04-18 09:50:42.147024] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:18.927 [2024-04-18 09:50:42.159606] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:18.927 [2024-04-18 09:50:42.159667] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:18.927 [2024-04-18 09:50:42.179015] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:18.927 [2024-04-18 09:50:42.179080] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:18.927 [2024-04-18 09:50:42.193822] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:18.927 [2024-04-18 09:50:42.193877] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:18.927 [2024-04-18 09:50:42.211907] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:18.927 [2024-04-18 09:50:42.211957] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:18.927 [2024-04-18 09:50:42.228889] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:18.927 [2024-04-18 09:50:42.228952] 
nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:22:18.927 [2024-04-18 09:50:42.246532] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:22:18.927 [2024-04-18 09:50:42.246582] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
[... this error pair repeats for every further nvmf_subsystem_add_ns attempt in the loop, from 09:50:42.259 through 09:50:45.128; the individual occurrences are omitted here ...]
00:22:21.802
00:22:21.802 Latency(us)
00:22:21.802 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:22:21.802 Job: Nvme1n1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 128, IO size: 8192)
00:22:21.802 Nvme1n1 : 5.01 8243.03 64.40 0.00 0.00 15505.60 5779.08 27763.43
00:22:21.802 ===================================================================================================================
00:22:21.802 Total : 8243.03 64.40 0.00 0.00 15505.60 5779.08 27763.43
[... the same add-namespace error pair resumes and repeats from 09:50:45.139 through 09:50:46.272; individual occurrences omitted ...]
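As a quick sanity check on the table above (not part of the original log output), the throughput column follows directly from the IOPS column and the 8192-byte I/O size shown in the Job line:

```bash
# 8243.03 IOPS x 8192 B per I/O, expressed in MiB/s (1 MiB = 1048576 B)
echo '8243.03 * 8192 / 1048576' | bc -l    # ~64.40, matching the MiB/s column
```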
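For context on the error pair condensed above: spdk_nvmf_subsystem_add_ns_ext rejects a namespace ID that is already attached to the subsystem, and the RPC handler then logs the add failure. A minimal sketch of how the same pair can be provoked by hand with SPDK's rpc.py is shown below; it assumes a freshly started target with the subsystem from this job and two hypothetical bdevs Malloc0/Malloc1, and the rpc.py path is an assumption, not taken from the log.

```bash
RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py   # assumed location of SPDK's RPC client

# First attach succeeds and claims NSID 1.
"$RPC" nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 -n 1
# Second attach asks for the same explicit NSID and is rejected with
# "Requested NSID 1 already in use" followed by "Unable to add namespace".
"$RPC" nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1
```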
subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:23.099 [2024-04-18 09:50:46.248383] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:23.099 [2024-04-18 09:50:46.260369] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:23.099 [2024-04-18 09:50:46.260415] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:23.099 [2024-04-18 09:50:46.272355] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:23.099 [2024-04-18 09:50:46.272397] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:23.099 /home/vagrant/spdk_repo/spdk/test/nvmf/target/zcopy.sh: line 42: kill: (70781) - No such process 00:22:23.099 09:50:46 -- target/zcopy.sh@49 -- # wait 70781 00:22:23.099 09:50:46 -- target/zcopy.sh@52 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:22:23.099 09:50:46 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:23.099 09:50:46 -- common/autotest_common.sh@10 -- # set +x 00:22:23.099 09:50:46 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:23.099 09:50:46 -- target/zcopy.sh@53 -- # rpc_cmd bdev_delay_create -b malloc0 -d delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:22:23.099 09:50:46 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:23.099 09:50:46 -- common/autotest_common.sh@10 -- # set +x 00:22:23.099 delay0 00:22:23.099 09:50:46 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:23.099 09:50:46 -- target/zcopy.sh@54 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1 00:22:23.099 09:50:46 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:23.099 09:50:46 -- common/autotest_common.sh@10 -- # set +x 00:22:23.099 09:50:46 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:23.099 09:50:46 -- target/zcopy.sh@56 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 ns:1' 00:22:23.358 [2024-04-18 09:50:46.533512] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:22:29.925 Initializing NVMe Controllers 00:22:29.925 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:22:29.925 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:22:29.925 Initialization complete. Launching workers. 
00:22:29.925 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 I/O completed: 320, failed: 83 00:22:29.925 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) abort submitted 370, failed to submit 33 00:22:29.925 success 255, unsuccess 115, failed 0 00:22:29.925 09:50:52 -- target/zcopy.sh@59 -- # trap - SIGINT SIGTERM EXIT 00:22:29.925 09:50:52 -- target/zcopy.sh@60 -- # nvmftestfini 00:22:29.925 09:50:52 -- nvmf/common.sh@477 -- # nvmfcleanup 00:22:29.925 09:50:52 -- nvmf/common.sh@117 -- # sync 00:22:29.925 09:50:52 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:22:29.925 09:50:52 -- nvmf/common.sh@120 -- # set +e 00:22:29.925 09:50:52 -- nvmf/common.sh@121 -- # for i in {1..20} 00:22:29.925 09:50:52 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:22:29.925 rmmod nvme_tcp 00:22:29.925 rmmod nvme_fabrics 00:22:29.925 rmmod nvme_keyring 00:22:29.925 09:50:52 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:22:29.925 09:50:52 -- nvmf/common.sh@124 -- # set -e 00:22:29.925 09:50:52 -- nvmf/common.sh@125 -- # return 0 00:22:29.925 09:50:52 -- nvmf/common.sh@478 -- # '[' -n 70618 ']' 00:22:29.925 09:50:52 -- nvmf/common.sh@479 -- # killprocess 70618 00:22:29.925 09:50:52 -- common/autotest_common.sh@936 -- # '[' -z 70618 ']' 00:22:29.925 09:50:52 -- common/autotest_common.sh@940 -- # kill -0 70618 00:22:29.925 09:50:52 -- common/autotest_common.sh@941 -- # uname 00:22:29.925 09:50:52 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:22:29.925 09:50:52 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 70618 00:22:29.925 09:50:52 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:22:29.925 killing process with pid 70618 00:22:29.925 09:50:52 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:22:29.925 09:50:52 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 70618' 00:22:29.925 09:50:52 -- common/autotest_common.sh@955 -- # kill 70618 00:22:29.925 09:50:52 -- common/autotest_common.sh@960 -- # wait 70618 00:22:30.861 09:50:53 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:22:30.861 09:50:53 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:22:30.861 09:50:53 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:22:30.861 09:50:53 -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:22:30.861 09:50:53 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:22:30.861 09:50:53 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:30.861 09:50:53 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:22:30.861 09:50:53 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:30.861 09:50:54 -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:22:30.861 00:22:30.861 real 0m28.362s 00:22:30.861 user 0m46.972s 00:22:30.861 sys 0m6.829s 00:22:30.861 09:50:54 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:22:30.861 09:50:54 -- common/autotest_common.sh@10 -- # set +x 00:22:30.861 ************************************ 00:22:30.861 END TEST nvmf_zcopy 00:22:30.861 ************************************ 00:22:30.861 09:50:54 -- nvmf/nvmf.sh@54 -- # run_test nvmf_nmic /home/vagrant/spdk_repo/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:22:30.861 09:50:54 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:22:30.861 09:50:54 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:22:30.861 09:50:54 -- common/autotest_common.sh@10 -- # set +x 00:22:30.861 ************************************ 00:22:30.861 START TEST nvmf_nmic 
00:22:30.861 ************************************ 00:22:30.861 09:50:54 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:22:30.861 * Looking for test storage... 00:22:30.861 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:22:30.861 09:50:54 -- target/nmic.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:22:30.861 09:50:54 -- nvmf/common.sh@7 -- # uname -s 00:22:30.861 09:50:54 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:30.861 09:50:54 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:30.861 09:50:54 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:31.120 09:50:54 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:31.120 09:50:54 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:31.120 09:50:54 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:31.120 09:50:54 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:31.120 09:50:54 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:31.120 09:50:54 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:31.120 09:50:54 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:31.120 09:50:54 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:e0c1fd61-2902-4c83-8893-eeb512aeb67d 00:22:31.120 09:50:54 -- nvmf/common.sh@18 -- # NVME_HOSTID=e0c1fd61-2902-4c83-8893-eeb512aeb67d 00:22:31.120 09:50:54 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:31.120 09:50:54 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:31.120 09:50:54 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:22:31.120 09:50:54 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:31.120 09:50:54 -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:22:31.120 09:50:54 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:31.120 09:50:54 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:31.120 09:50:54 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:31.120 09:50:54 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:31.120 09:50:54 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:31.120 09:50:54 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:31.120 09:50:54 -- paths/export.sh@5 -- # export PATH 00:22:31.120 09:50:54 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:31.120 09:50:54 -- nvmf/common.sh@47 -- # : 0 00:22:31.120 09:50:54 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:22:31.120 09:50:54 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:22:31.120 09:50:54 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:22:31.120 09:50:54 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:31.120 09:50:54 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:31.120 09:50:54 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:22:31.120 09:50:54 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:22:31.120 09:50:54 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:22:31.120 09:50:54 -- target/nmic.sh@11 -- # MALLOC_BDEV_SIZE=64 00:22:31.120 09:50:54 -- target/nmic.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:22:31.120 09:50:54 -- target/nmic.sh@14 -- # nvmftestinit 00:22:31.120 09:50:54 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:22:31.120 09:50:54 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:31.120 09:50:54 -- nvmf/common.sh@437 -- # prepare_net_devs 00:22:31.120 09:50:54 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:22:31.120 09:50:54 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:22:31.120 09:50:54 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:31.120 09:50:54 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:22:31.121 09:50:54 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:31.121 09:50:54 -- nvmf/common.sh@403 -- # [[ virt != virt ]] 00:22:31.121 09:50:54 -- nvmf/common.sh@405 -- # [[ no == yes ]] 00:22:31.121 09:50:54 -- nvmf/common.sh@412 -- # [[ virt == phy ]] 00:22:31.121 09:50:54 -- nvmf/common.sh@415 -- # [[ virt == phy-fallback ]] 00:22:31.121 09:50:54 -- nvmf/common.sh@420 -- # [[ tcp == tcp ]] 00:22:31.121 09:50:54 -- nvmf/common.sh@421 -- # nvmf_veth_init 00:22:31.121 09:50:54 -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:31.121 09:50:54 -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:31.121 09:50:54 -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:22:31.121 09:50:54 -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:22:31.121 09:50:54 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:22:31.121 09:50:54 -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:22:31.121 09:50:54 -- nvmf/common.sh@147 -- # 
NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:22:31.121 09:50:54 -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:31.121 09:50:54 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:22:31.121 09:50:54 -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:22:31.121 09:50:54 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:22:31.121 09:50:54 -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:22:31.121 09:50:54 -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:22:31.121 09:50:54 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:22:31.121 Cannot find device "nvmf_tgt_br" 00:22:31.121 09:50:54 -- nvmf/common.sh@155 -- # true 00:22:31.121 09:50:54 -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:22:31.121 Cannot find device "nvmf_tgt_br2" 00:22:31.121 09:50:54 -- nvmf/common.sh@156 -- # true 00:22:31.121 09:50:54 -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:22:31.121 09:50:54 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:22:31.121 Cannot find device "nvmf_tgt_br" 00:22:31.121 09:50:54 -- nvmf/common.sh@158 -- # true 00:22:31.121 09:50:54 -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:22:31.121 Cannot find device "nvmf_tgt_br2" 00:22:31.121 09:50:54 -- nvmf/common.sh@159 -- # true 00:22:31.121 09:50:54 -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:22:31.121 09:50:54 -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:22:31.121 09:50:54 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:22:31.121 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:22:31.121 09:50:54 -- nvmf/common.sh@162 -- # true 00:22:31.121 09:50:54 -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:22:31.121 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:22:31.121 09:50:54 -- nvmf/common.sh@163 -- # true 00:22:31.121 09:50:54 -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:22:31.121 09:50:54 -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:22:31.121 09:50:54 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:22:31.121 09:50:54 -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:22:31.121 09:50:54 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:22:31.121 09:50:54 -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:22:31.121 09:50:54 -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:22:31.121 09:50:54 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:22:31.121 09:50:54 -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:22:31.121 09:50:54 -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:22:31.121 09:50:54 -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:22:31.121 09:50:54 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:22:31.121 09:50:54 -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:22:31.121 09:50:54 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:22:31.380 09:50:54 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:22:31.380 09:50:54 -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip 
link set lo up 00:22:31.380 09:50:54 -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:22:31.380 09:50:54 -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:22:31.380 09:50:54 -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:22:31.380 09:50:54 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:22:31.380 09:50:54 -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:22:31.380 09:50:54 -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:22:31.380 09:50:54 -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:22:31.380 09:50:54 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:22:31.380 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:22:31.380 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.121 ms 00:22:31.380 00:22:31.380 --- 10.0.0.2 ping statistics --- 00:22:31.380 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:31.380 rtt min/avg/max/mdev = 0.121/0.121/0.121/0.000 ms 00:22:31.380 09:50:54 -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:22:31.380 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:22:31.380 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.037 ms 00:22:31.380 00:22:31.380 --- 10.0.0.3 ping statistics --- 00:22:31.380 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:31.380 rtt min/avg/max/mdev = 0.037/0.037/0.037/0.000 ms 00:22:31.380 09:50:54 -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:22:31.380 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:22:31.380 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.031 ms 00:22:31.380 00:22:31.380 --- 10.0.0.1 ping statistics --- 00:22:31.380 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:31.380 rtt min/avg/max/mdev = 0.031/0.031/0.031/0.000 ms 00:22:31.380 09:50:54 -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:31.380 09:50:54 -- nvmf/common.sh@422 -- # return 0 00:22:31.380 09:50:54 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:22:31.380 09:50:54 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:31.380 09:50:54 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:22:31.380 09:50:54 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:22:31.380 09:50:54 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:31.380 09:50:54 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:22:31.380 09:50:54 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:22:31.380 09:50:54 -- target/nmic.sh@15 -- # nvmfappstart -m 0xF 00:22:31.380 09:50:54 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:22:31.380 09:50:54 -- common/autotest_common.sh@710 -- # xtrace_disable 00:22:31.380 09:50:54 -- common/autotest_common.sh@10 -- # set +x 00:22:31.380 09:50:54 -- nvmf/common.sh@470 -- # nvmfpid=71135 00:22:31.380 09:50:54 -- nvmf/common.sh@469 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:22:31.380 09:50:54 -- nvmf/common.sh@471 -- # waitforlisten 71135 00:22:31.380 09:50:54 -- common/autotest_common.sh@817 -- # '[' -z 71135 ']' 00:22:31.380 09:50:54 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:31.380 09:50:54 -- common/autotest_common.sh@822 -- # local max_retries=100 00:22:31.380 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
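The ping checks above exercise the veth/bridge topology that nvmf_veth_init builds for NET_TYPE=virt runs: the initiator side stays in the root namespace on 10.0.0.1 while the target interfaces live in the nvmf_tgt_ns_spdk namespace on 10.0.0.2/10.0.0.3, joined through the nvmf_br bridge. A condensed sketch of that setup, reusing the commands and interface names visible in the trace (the second target interface and the teardown path are left out):

#!/usr/bin/env bash
# Sketch of the topology nvmf_veth_init builds; names/addresses as in the log above.
set -e
ip netns add nvmf_tgt_ns_spdk

# One veth pair per side; the target end is moved into the namespace running nvmf_tgt.
ip link add nvmf_init_if type veth peer name nvmf_init_br
ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br
ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk

ip addr add 10.0.0.1/24 dev nvmf_init_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if

ip link set nvmf_init_if up; ip link set nvmf_init_br up; ip link set nvmf_tgt_br up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
ip netns exec nvmf_tgt_ns_spdk ip link set lo up

# Bridge the host-side peers and allow NVMe/TCP traffic on port 4420.
ip link add nvmf_br type bridge; ip link set nvmf_br up
ip link set nvmf_init_br master nvmf_br
ip link set nvmf_tgt_br  master nvmf_br
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT

ping -c 1 10.0.0.2                                   # initiator -> target
ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1    # target -> initiator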
00:22:31.380 09:50:54 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:31.380 09:50:54 -- common/autotest_common.sh@826 -- # xtrace_disable 00:22:31.380 09:50:54 -- common/autotest_common.sh@10 -- # set +x 00:22:31.380 [2024-04-18 09:50:54.709030] Starting SPDK v24.05-pre git sha1 65b4e17c6 / DPDK 23.11.0 initialization... 00:22:31.380 [2024-04-18 09:50:54.709206] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:31.639 [2024-04-18 09:50:54.877659] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:22:31.954 [2024-04-18 09:50:55.171008] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:31.954 [2024-04-18 09:50:55.171076] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:31.954 [2024-04-18 09:50:55.171097] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:31.954 [2024-04-18 09:50:55.171110] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:31.954 [2024-04-18 09:50:55.171136] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:22:31.954 [2024-04-18 09:50:55.171277] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:22:31.954 [2024-04-18 09:50:55.171538] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:22:31.954 [2024-04-18 09:50:55.172031] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:22:31.954 [2024-04-18 09:50:55.172033] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:22:32.521 09:50:55 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:22:32.521 09:50:55 -- common/autotest_common.sh@850 -- # return 0 00:22:32.521 09:50:55 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:22:32.521 09:50:55 -- common/autotest_common.sh@716 -- # xtrace_disable 00:22:32.521 09:50:55 -- common/autotest_common.sh@10 -- # set +x 00:22:32.521 09:50:55 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:32.521 09:50:55 -- target/nmic.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:22:32.521 09:50:55 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:32.521 09:50:55 -- common/autotest_common.sh@10 -- # set +x 00:22:32.521 [2024-04-18 09:50:55.654942] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:32.521 09:50:55 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:32.521 09:50:55 -- target/nmic.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:22:32.521 09:50:55 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:32.521 09:50:55 -- common/autotest_common.sh@10 -- # set +x 00:22:32.521 Malloc0 00:22:32.521 09:50:55 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:32.521 09:50:55 -- target/nmic.sh@21 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:22:32.521 09:50:55 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:32.521 09:50:55 -- common/autotest_common.sh@10 -- # set +x 00:22:32.521 09:50:55 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:32.521 09:50:55 -- target/nmic.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns 
nqn.2016-06.io.spdk:cnode1 Malloc0 00:22:32.521 09:50:55 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:32.521 09:50:55 -- common/autotest_common.sh@10 -- # set +x 00:22:32.521 09:50:55 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:32.521 09:50:55 -- target/nmic.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:22:32.521 09:50:55 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:32.521 09:50:55 -- common/autotest_common.sh@10 -- # set +x 00:22:32.521 [2024-04-18 09:50:55.780899] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:32.521 09:50:55 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:32.521 test case1: single bdev can't be used in multiple subsystems 00:22:32.521 09:50:55 -- target/nmic.sh@25 -- # echo 'test case1: single bdev can'\''t be used in multiple subsystems' 00:22:32.521 09:50:55 -- target/nmic.sh@26 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:22:32.521 09:50:55 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:32.521 09:50:55 -- common/autotest_common.sh@10 -- # set +x 00:22:32.521 09:50:55 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:32.521 09:50:55 -- target/nmic.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:22:32.521 09:50:55 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:32.521 09:50:55 -- common/autotest_common.sh@10 -- # set +x 00:22:32.521 09:50:55 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:32.521 09:50:55 -- target/nmic.sh@28 -- # nmic_status=0 00:22:32.521 09:50:55 -- target/nmic.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0 00:22:32.521 09:50:55 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:32.521 09:50:55 -- common/autotest_common.sh@10 -- # set +x 00:22:32.521 [2024-04-18 09:50:55.804705] bdev.c:7988:bdev_open: *ERROR*: bdev Malloc0 already claimed: type exclusive_write by module NVMe-oF Target 00:22:32.521 [2024-04-18 09:50:55.804764] subsystem.c:1930:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode2: bdev Malloc0 cannot be opened, error=-1 00:22:32.521 [2024-04-18 09:50:55.804786] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:32.521 request: 00:22:32.521 { 00:22:32.521 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:22:32.521 "namespace": { 00:22:32.521 "bdev_name": "Malloc0", 00:22:32.521 "no_auto_visible": false 00:22:32.521 }, 00:22:32.521 "method": "nvmf_subsystem_add_ns", 00:22:32.521 "req_id": 1 00:22:32.521 } 00:22:32.521 Got JSON-RPC error response 00:22:32.521 response: 00:22:32.521 { 00:22:32.521 "code": -32602, 00:22:32.521 "message": "Invalid parameters" 00:22:32.521 } 00:22:32.521 09:50:55 -- common/autotest_common.sh@577 -- # [[ 1 == 0 ]] 00:22:32.521 09:50:55 -- target/nmic.sh@29 -- # nmic_status=1 00:22:32.521 09:50:55 -- target/nmic.sh@31 -- # '[' 1 -eq 0 ']' 00:22:32.521 Adding namespace failed - expected result. 00:22:32.521 09:50:55 -- target/nmic.sh@36 -- # echo ' Adding namespace failed - expected result.' 
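Test case 1 above hinges on the exclusive_write claim a bdev receives when it is added to a subsystem: once Malloc0 belongs to cnode1, the nvmf_subsystem_add_ns call for cnode2 is expected to fail with the -32602 response shown. A minimal sketch of the same check driven by hand with rpc.py against a running nvmf_tgt (default socket /var/tmp/spdk.sock assumed; the commands mirror the rpc_cmd calls in the trace, and option spelling may vary across SPDK versions):

# Reproduce the "single bdev can't be used in multiple subsystems" check by hand.
rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

$rpc nvmf_create_transport -t tcp -o -u 8192
$rpc bdev_malloc_create 64 512 -b Malloc0
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0    # Malloc0 now claimed exclusive_write

$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0    # expected: "bdev Malloc0 already claimed", JSON-RPC -32602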
00:22:32.521 test case2: host connect to nvmf target in multiple paths 00:22:32.521 09:50:55 -- target/nmic.sh@39 -- # echo 'test case2: host connect to nvmf target in multiple paths' 00:22:32.521 09:50:55 -- target/nmic.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:22:32.521 09:50:55 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:32.521 09:50:55 -- common/autotest_common.sh@10 -- # set +x 00:22:32.521 [2024-04-18 09:50:55.816892] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:22:32.521 09:50:55 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:32.521 09:50:55 -- target/nmic.sh@41 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:e0c1fd61-2902-4c83-8893-eeb512aeb67d --hostid=e0c1fd61-2902-4c83-8893-eeb512aeb67d -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:22:32.780 09:50:55 -- target/nmic.sh@42 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:e0c1fd61-2902-4c83-8893-eeb512aeb67d --hostid=e0c1fd61-2902-4c83-8893-eeb512aeb67d -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4421 00:22:32.780 09:50:56 -- target/nmic.sh@44 -- # waitforserial SPDKISFASTANDAWESOME 00:22:32.780 09:50:56 -- common/autotest_common.sh@1184 -- # local i=0 00:22:32.780 09:50:56 -- common/autotest_common.sh@1185 -- # local nvme_device_counter=1 nvme_devices=0 00:22:32.780 09:50:56 -- common/autotest_common.sh@1186 -- # [[ -n '' ]] 00:22:32.780 09:50:56 -- common/autotest_common.sh@1191 -- # sleep 2 00:22:35.311 09:50:58 -- common/autotest_common.sh@1192 -- # (( i++ <= 15 )) 00:22:35.311 09:50:58 -- common/autotest_common.sh@1193 -- # grep -c SPDKISFASTANDAWESOME 00:22:35.311 09:50:58 -- common/autotest_common.sh@1193 -- # lsblk -l -o NAME,SERIAL 00:22:35.311 09:50:58 -- common/autotest_common.sh@1193 -- # nvme_devices=1 00:22:35.311 09:50:58 -- common/autotest_common.sh@1194 -- # (( nvme_devices == nvme_device_counter )) 00:22:35.311 09:50:58 -- common/autotest_common.sh@1194 -- # return 0 00:22:35.311 09:50:58 -- target/nmic.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:22:35.311 [global] 00:22:35.311 thread=1 00:22:35.311 invalidate=1 00:22:35.311 rw=write 00:22:35.311 time_based=1 00:22:35.311 runtime=1 00:22:35.311 ioengine=libaio 00:22:35.311 direct=1 00:22:35.311 bs=4096 00:22:35.311 iodepth=1 00:22:35.311 norandommap=0 00:22:35.311 numjobs=1 00:22:35.311 00:22:35.311 verify_dump=1 00:22:35.311 verify_backlog=512 00:22:35.311 verify_state_save=0 00:22:35.311 do_verify=1 00:22:35.311 verify=crc32c-intel 00:22:35.311 [job0] 00:22:35.311 filename=/dev/nvme0n1 00:22:35.311 Could not set queue depth (nvme0n1) 00:22:35.311 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:22:35.311 fio-3.35 00:22:35.311 Starting 1 thread 00:22:36.246 00:22:36.246 job0: (groupid=0, jobs=1): err= 0: pid=71221: Thu Apr 18 09:50:59 2024 00:22:36.246 read: IOPS=2251, BW=9007KiB/s (9223kB/s)(9016KiB/1001msec) 00:22:36.246 slat (nsec): min=12235, max=50105, avg=14789.79, stdev=3215.10 00:22:36.246 clat (usec): min=193, max=900, avg=231.29, stdev=28.03 00:22:36.246 lat (usec): min=208, max=936, avg=246.08, stdev=28.63 00:22:36.246 clat percentiles (usec): 00:22:36.246 | 1.00th=[ 204], 5.00th=[ 210], 10.00th=[ 215], 20.00th=[ 219], 00:22:36.246 | 30.00th=[ 223], 40.00th=[ 227], 50.00th=[ 229], 60.00th=[ 233], 00:22:36.246 | 70.00th=[ 237], 80.00th=[ 241], 90.00th=[ 247], 
95.00th=[ 255], 00:22:36.246 | 99.00th=[ 277], 99.50th=[ 297], 99.90th=[ 676], 99.95th=[ 873], 00:22:36.246 | 99.99th=[ 898] 00:22:36.246 write: IOPS=2557, BW=9.99MiB/s (10.5MB/s)(10.0MiB/1001msec); 0 zone resets 00:22:36.246 slat (usec): min=17, max=117, avg=21.90, stdev= 5.31 00:22:36.246 clat (usec): min=116, max=5765, avg=148.93, stdev=135.68 00:22:36.246 lat (usec): min=141, max=5785, avg=170.84, stdev=136.29 00:22:36.246 clat percentiles (usec): 00:22:36.246 | 1.00th=[ 124], 5.00th=[ 129], 10.00th=[ 133], 20.00th=[ 135], 00:22:36.246 | 30.00th=[ 139], 40.00th=[ 141], 50.00th=[ 143], 60.00th=[ 145], 00:22:36.246 | 70.00th=[ 149], 80.00th=[ 153], 90.00th=[ 157], 95.00th=[ 163], 00:22:36.246 | 99.00th=[ 184], 99.50th=[ 196], 99.90th=[ 2442], 99.95th=[ 2966], 00:22:36.246 | 99.99th=[ 5735] 00:22:36.246 bw ( KiB/s): min=11808, max=11808, per=100.00%, avg=11808.00, stdev= 0.00, samples=1 00:22:36.246 iops : min= 2952, max= 2952, avg=2952.00, stdev= 0.00, samples=1 00:22:36.246 lat (usec) : 250=96.24%, 500=3.53%, 750=0.10%, 1000=0.04% 00:22:36.246 lat (msec) : 2=0.02%, 4=0.04%, 10=0.02% 00:22:36.246 cpu : usr=2.10%, sys=6.40%, ctx=4814, majf=0, minf=2 00:22:36.246 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:22:36.246 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:36.246 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:36.246 issued rwts: total=2254,2560,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:36.246 latency : target=0, window=0, percentile=100.00%, depth=1 00:22:36.246 00:22:36.246 Run status group 0 (all jobs): 00:22:36.246 READ: bw=9007KiB/s (9223kB/s), 9007KiB/s-9007KiB/s (9223kB/s-9223kB/s), io=9016KiB (9232kB), run=1001-1001msec 00:22:36.246 WRITE: bw=9.99MiB/s (10.5MB/s), 9.99MiB/s-9.99MiB/s (10.5MB/s-10.5MB/s), io=10.0MiB (10.5MB), run=1001-1001msec 00:22:36.246 00:22:36.246 Disk stats (read/write): 00:22:36.246 nvme0n1: ios=2098/2252, merge=0/0, ticks=511/342, in_queue=853, util=91.58% 00:22:36.246 09:50:59 -- target/nmic.sh@48 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:22:36.246 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s) 00:22:36.246 09:50:59 -- target/nmic.sh@49 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:22:36.246 09:50:59 -- common/autotest_common.sh@1205 -- # local i=0 00:22:36.246 09:50:59 -- common/autotest_common.sh@1206 -- # lsblk -o NAME,SERIAL 00:22:36.246 09:50:59 -- common/autotest_common.sh@1206 -- # grep -q -w SPDKISFASTANDAWESOME 00:22:36.246 09:50:59 -- common/autotest_common.sh@1213 -- # grep -q -w SPDKISFASTANDAWESOME 00:22:36.246 09:50:59 -- common/autotest_common.sh@1213 -- # lsblk -l -o NAME,SERIAL 00:22:36.246 09:50:59 -- common/autotest_common.sh@1217 -- # return 0 00:22:36.246 09:50:59 -- target/nmic.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:22:36.246 09:50:59 -- target/nmic.sh@53 -- # nvmftestfini 00:22:36.246 09:50:59 -- nvmf/common.sh@477 -- # nvmfcleanup 00:22:36.246 09:50:59 -- nvmf/common.sh@117 -- # sync 00:22:36.246 09:50:59 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:22:36.246 09:50:59 -- nvmf/common.sh@120 -- # set +e 00:22:36.246 09:50:59 -- nvmf/common.sh@121 -- # for i in {1..20} 00:22:36.246 09:50:59 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:22:36.246 rmmod nvme_tcp 00:22:36.246 rmmod nvme_fabrics 00:22:36.246 rmmod nvme_keyring 00:22:36.246 09:50:59 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:22:36.246 09:50:59 -- nvmf/common.sh@124 -- # set -e 00:22:36.246 09:50:59 -- 
nvmf/common.sh@125 -- # return 0 00:22:36.246 09:50:59 -- nvmf/common.sh@478 -- # '[' -n 71135 ']' 00:22:36.246 09:50:59 -- nvmf/common.sh@479 -- # killprocess 71135 00:22:36.246 09:50:59 -- common/autotest_common.sh@936 -- # '[' -z 71135 ']' 00:22:36.246 09:50:59 -- common/autotest_common.sh@940 -- # kill -0 71135 00:22:36.246 09:50:59 -- common/autotest_common.sh@941 -- # uname 00:22:36.246 09:50:59 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:22:36.246 09:50:59 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 71135 00:22:36.246 09:50:59 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:22:36.246 09:50:59 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:22:36.246 killing process with pid 71135 00:22:36.246 09:50:59 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 71135' 00:22:36.247 09:50:59 -- common/autotest_common.sh@955 -- # kill 71135 00:22:36.247 09:50:59 -- common/autotest_common.sh@960 -- # wait 71135 00:22:37.630 09:51:00 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:22:37.630 09:51:00 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:22:37.630 09:51:00 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:22:37.630 09:51:00 -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:22:37.630 09:51:00 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:22:37.630 09:51:00 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:37.630 09:51:00 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:22:37.630 09:51:00 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:37.630 09:51:00 -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:22:37.630 00:22:37.630 real 0m6.808s 00:22:37.630 user 0m20.632s 00:22:37.630 sys 0m2.213s 00:22:37.630 09:51:00 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:22:37.630 09:51:00 -- common/autotest_common.sh@10 -- # set +x 00:22:37.630 ************************************ 00:22:37.630 END TEST nvmf_nmic 00:22:37.630 ************************************ 00:22:37.630 09:51:00 -- nvmf/nvmf.sh@55 -- # run_test nvmf_fio_target /home/vagrant/spdk_repo/spdk/test/nvmf/target/fio.sh --transport=tcp 00:22:37.630 09:51:00 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:22:37.630 09:51:00 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:22:37.630 09:51:00 -- common/autotest_common.sh@10 -- # set +x 00:22:37.889 ************************************ 00:22:37.889 START TEST nvmf_fio_target 00:22:37.889 ************************************ 00:22:37.889 09:51:01 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/fio.sh --transport=tcp 00:22:37.889 * Looking for test storage... 
00:22:37.889 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:22:37.889 09:51:01 -- target/fio.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:22:37.889 09:51:01 -- nvmf/common.sh@7 -- # uname -s 00:22:37.889 09:51:01 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:37.889 09:51:01 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:37.889 09:51:01 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:37.889 09:51:01 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:37.889 09:51:01 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:37.889 09:51:01 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:37.889 09:51:01 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:37.889 09:51:01 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:37.889 09:51:01 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:37.889 09:51:01 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:37.889 09:51:01 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:e0c1fd61-2902-4c83-8893-eeb512aeb67d 00:22:37.889 09:51:01 -- nvmf/common.sh@18 -- # NVME_HOSTID=e0c1fd61-2902-4c83-8893-eeb512aeb67d 00:22:37.889 09:51:01 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:37.889 09:51:01 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:37.889 09:51:01 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:22:37.889 09:51:01 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:37.889 09:51:01 -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:22:37.889 09:51:01 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:37.889 09:51:01 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:37.889 09:51:01 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:37.889 09:51:01 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:37.889 09:51:01 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:37.889 09:51:01 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:37.889 09:51:01 -- paths/export.sh@5 -- # export PATH 00:22:37.889 09:51:01 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:37.889 09:51:01 -- nvmf/common.sh@47 -- # : 0 00:22:37.889 09:51:01 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:22:37.889 09:51:01 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:22:37.889 09:51:01 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:22:37.889 09:51:01 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:37.889 09:51:01 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:37.889 09:51:01 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:22:37.889 09:51:01 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:22:37.889 09:51:01 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:22:37.889 09:51:01 -- target/fio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:22:37.889 09:51:01 -- target/fio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:22:37.889 09:51:01 -- target/fio.sh@14 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:22:37.889 09:51:01 -- target/fio.sh@16 -- # nvmftestinit 00:22:37.889 09:51:01 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:22:37.889 09:51:01 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:37.889 09:51:01 -- nvmf/common.sh@437 -- # prepare_net_devs 00:22:37.889 09:51:01 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:22:37.889 09:51:01 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:22:37.889 09:51:01 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:37.889 09:51:01 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:22:37.889 09:51:01 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:37.889 09:51:01 -- nvmf/common.sh@403 -- # [[ virt != virt ]] 00:22:37.889 09:51:01 -- nvmf/common.sh@405 -- # [[ no == yes ]] 00:22:37.889 09:51:01 -- nvmf/common.sh@412 -- # [[ virt == phy ]] 00:22:37.889 09:51:01 -- nvmf/common.sh@415 -- # [[ virt == phy-fallback ]] 00:22:37.889 09:51:01 -- nvmf/common.sh@420 -- # [[ tcp == tcp ]] 00:22:37.889 09:51:01 -- nvmf/common.sh@421 -- # nvmf_veth_init 00:22:37.889 09:51:01 -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:37.889 09:51:01 -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:37.889 09:51:01 -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:22:37.889 09:51:01 -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:22:37.889 09:51:01 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:22:37.889 09:51:01 -- nvmf/common.sh@146 -- # 
NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:22:37.889 09:51:01 -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:22:37.889 09:51:01 -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:37.889 09:51:01 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:22:37.889 09:51:01 -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:22:37.889 09:51:01 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:22:37.889 09:51:01 -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:22:37.889 09:51:01 -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:22:37.889 09:51:01 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:22:37.889 Cannot find device "nvmf_tgt_br" 00:22:37.889 09:51:01 -- nvmf/common.sh@155 -- # true 00:22:37.889 09:51:01 -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:22:37.889 Cannot find device "nvmf_tgt_br2" 00:22:37.889 09:51:01 -- nvmf/common.sh@156 -- # true 00:22:37.889 09:51:01 -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:22:37.889 09:51:01 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:22:37.889 Cannot find device "nvmf_tgt_br" 00:22:37.889 09:51:01 -- nvmf/common.sh@158 -- # true 00:22:37.889 09:51:01 -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:22:37.889 Cannot find device "nvmf_tgt_br2" 00:22:37.889 09:51:01 -- nvmf/common.sh@159 -- # true 00:22:37.889 09:51:01 -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:22:38.148 09:51:01 -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:22:38.148 09:51:01 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:22:38.148 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:22:38.148 09:51:01 -- nvmf/common.sh@162 -- # true 00:22:38.148 09:51:01 -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:22:38.148 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:22:38.148 09:51:01 -- nvmf/common.sh@163 -- # true 00:22:38.148 09:51:01 -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:22:38.148 09:51:01 -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:22:38.148 09:51:01 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:22:38.148 09:51:01 -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:22:38.148 09:51:01 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:22:38.148 09:51:01 -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:22:38.148 09:51:01 -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:22:38.148 09:51:01 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:22:38.148 09:51:01 -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:22:38.148 09:51:01 -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:22:38.148 09:51:01 -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:22:38.148 09:51:01 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:22:38.148 09:51:01 -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:22:38.148 09:51:01 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:22:38.148 09:51:01 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 
00:22:38.148 09:51:01 -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:22:38.148 09:51:01 -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:22:38.148 09:51:01 -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:22:38.148 09:51:01 -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:22:38.148 09:51:01 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:22:38.148 09:51:01 -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:22:38.148 09:51:01 -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:22:38.148 09:51:01 -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:22:38.148 09:51:01 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:22:38.148 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:22:38.148 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.089 ms 00:22:38.148 00:22:38.148 --- 10.0.0.2 ping statistics --- 00:22:38.148 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:38.148 rtt min/avg/max/mdev = 0.089/0.089/0.089/0.000 ms 00:22:38.148 09:51:01 -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:22:38.148 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:22:38.148 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.053 ms 00:22:38.148 00:22:38.148 --- 10.0.0.3 ping statistics --- 00:22:38.148 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:38.148 rtt min/avg/max/mdev = 0.053/0.053/0.053/0.000 ms 00:22:38.148 09:51:01 -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:22:38.148 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:22:38.148 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.037 ms 00:22:38.148 00:22:38.148 --- 10.0.0.1 ping statistics --- 00:22:38.148 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:38.148 rtt min/avg/max/mdev = 0.037/0.037/0.037/0.000 ms 00:22:38.148 09:51:01 -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:38.148 09:51:01 -- nvmf/common.sh@422 -- # return 0 00:22:38.148 09:51:01 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:22:38.148 09:51:01 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:38.148 09:51:01 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:22:38.148 09:51:01 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:22:38.148 09:51:01 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:38.148 09:51:01 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:22:38.148 09:51:01 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:22:38.148 09:51:01 -- target/fio.sh@17 -- # nvmfappstart -m 0xF 00:22:38.148 09:51:01 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:22:38.148 09:51:01 -- common/autotest_common.sh@710 -- # xtrace_disable 00:22:38.148 09:51:01 -- common/autotest_common.sh@10 -- # set +x 00:22:38.407 09:51:01 -- nvmf/common.sh@470 -- # nvmfpid=71421 00:22:38.407 09:51:01 -- nvmf/common.sh@469 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:22:38.407 09:51:01 -- nvmf/common.sh@471 -- # waitforlisten 71421 00:22:38.407 09:51:01 -- common/autotest_common.sh@817 -- # '[' -z 71421 ']' 00:22:38.407 09:51:01 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:38.407 09:51:01 -- common/autotest_common.sh@822 -- # local max_retries=100 00:22:38.407 09:51:01 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket 
/var/tmp/spdk.sock...' 00:22:38.407 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:38.407 09:51:01 -- common/autotest_common.sh@826 -- # xtrace_disable 00:22:38.407 09:51:01 -- common/autotest_common.sh@10 -- # set +x 00:22:38.407 [2024-04-18 09:51:01.632029] Starting SPDK v24.05-pre git sha1 65b4e17c6 / DPDK 23.11.0 initialization... 00:22:38.407 [2024-04-18 09:51:01.632195] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:38.666 [2024-04-18 09:51:01.801837] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:22:38.925 [2024-04-18 09:51:02.084973] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:38.925 [2024-04-18 09:51:02.085087] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:38.925 [2024-04-18 09:51:02.085152] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:38.925 [2024-04-18 09:51:02.085181] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:38.925 [2024-04-18 09:51:02.085216] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:22:38.925 [2024-04-18 09:51:02.085504] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:22:38.925 [2024-04-18 09:51:02.085633] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:22:38.925 [2024-04-18 09:51:02.086370] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:22:38.925 [2024-04-18 09:51:02.086401] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:22:39.183 09:51:02 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:22:39.183 09:51:02 -- common/autotest_common.sh@850 -- # return 0 00:22:39.183 09:51:02 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:22:39.183 09:51:02 -- common/autotest_common.sh@716 -- # xtrace_disable 00:22:39.183 09:51:02 -- common/autotest_common.sh@10 -- # set +x 00:22:39.441 09:51:02 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:39.441 09:51:02 -- target/fio.sh@19 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:22:39.699 [2024-04-18 09:51:02.841874] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:39.699 09:51:02 -- target/fio.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:22:39.956 09:51:03 -- target/fio.sh@21 -- # malloc_bdevs='Malloc0 ' 00:22:39.956 09:51:03 -- target/fio.sh@22 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:22:40.214 09:51:03 -- target/fio.sh@22 -- # malloc_bdevs+=Malloc1 00:22:40.214 09:51:03 -- target/fio.sh@24 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:22:40.471 09:51:03 -- target/fio.sh@24 -- # raid_malloc_bdevs='Malloc2 ' 00:22:40.471 09:51:03 -- target/fio.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:22:41.037 09:51:04 -- target/fio.sh@25 -- # raid_malloc_bdevs+=Malloc3 00:22:41.037 09:51:04 -- target/fio.sh@26 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3' 00:22:41.296 09:51:04 -- target/fio.sh@29 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:22:41.554 09:51:04 -- target/fio.sh@29 -- # concat_malloc_bdevs='Malloc4 ' 00:22:41.554 09:51:04 -- target/fio.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:22:41.812 09:51:05 -- target/fio.sh@30 -- # concat_malloc_bdevs+='Malloc5 ' 00:22:42.070 09:51:05 -- target/fio.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:22:42.328 09:51:05 -- target/fio.sh@31 -- # concat_malloc_bdevs+=Malloc6 00:22:42.328 09:51:05 -- target/fio.sh@32 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_create -n concat0 -r concat -z 64 -b 'Malloc4 Malloc5 Malloc6' 00:22:42.586 09:51:05 -- target/fio.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:22:42.845 09:51:06 -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:22:42.845 09:51:06 -- target/fio.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:22:43.103 09:51:06 -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:22:43.103 09:51:06 -- target/fio.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:22:43.361 09:51:06 -- target/fio.sh@38 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:22:43.619 [2024-04-18 09:51:06.937312] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:43.619 09:51:06 -- target/fio.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 raid0 00:22:43.877 09:51:07 -- target/fio.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 concat0 00:22:44.136 09:51:07 -- target/fio.sh@46 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:e0c1fd61-2902-4c83-8893-eeb512aeb67d --hostid=e0c1fd61-2902-4c83-8893-eeb512aeb67d -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:22:44.394 09:51:07 -- target/fio.sh@48 -- # waitforserial SPDKISFASTANDAWESOME 4 00:22:44.394 09:51:07 -- common/autotest_common.sh@1184 -- # local i=0 00:22:44.394 09:51:07 -- common/autotest_common.sh@1185 -- # local nvme_device_counter=1 nvme_devices=0 00:22:44.394 09:51:07 -- common/autotest_common.sh@1186 -- # [[ -n 4 ]] 00:22:44.394 09:51:07 -- common/autotest_common.sh@1187 -- # nvme_device_counter=4 00:22:44.394 09:51:07 -- common/autotest_common.sh@1191 -- # sleep 2 00:22:46.292 09:51:09 -- common/autotest_common.sh@1192 -- # (( i++ <= 15 )) 00:22:46.292 09:51:09 -- common/autotest_common.sh@1193 -- # lsblk -l -o NAME,SERIAL 00:22:46.292 09:51:09 -- common/autotest_common.sh@1193 -- # grep -c SPDKISFASTANDAWESOME 00:22:46.292 09:51:09 -- common/autotest_common.sh@1193 -- # nvme_devices=4 00:22:46.292 09:51:09 -- common/autotest_common.sh@1194 -- # (( nvme_devices == nvme_device_counter )) 00:22:46.292 09:51:09 -- common/autotest_common.sh@1194 -- # return 0 00:22:46.292 09:51:09 -- target/fio.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:22:46.292 [global] 00:22:46.292 thread=1 00:22:46.292 invalidate=1 00:22:46.292 rw=write 00:22:46.292 time_based=1 00:22:46.292 runtime=1 00:22:46.292 ioengine=libaio 00:22:46.292 direct=1 00:22:46.292 bs=4096 00:22:46.292 iodepth=1 00:22:46.292 norandommap=0 
00:22:46.292 numjobs=1 00:22:46.292 00:22:46.292 verify_dump=1 00:22:46.292 verify_backlog=512 00:22:46.292 verify_state_save=0 00:22:46.292 do_verify=1 00:22:46.292 verify=crc32c-intel 00:22:46.292 [job0] 00:22:46.292 filename=/dev/nvme0n1 00:22:46.292 [job1] 00:22:46.292 filename=/dev/nvme0n2 00:22:46.292 [job2] 00:22:46.292 filename=/dev/nvme0n3 00:22:46.292 [job3] 00:22:46.292 filename=/dev/nvme0n4 00:22:46.292 Could not set queue depth (nvme0n1) 00:22:46.292 Could not set queue depth (nvme0n2) 00:22:46.292 Could not set queue depth (nvme0n3) 00:22:46.292 Could not set queue depth (nvme0n4) 00:22:46.550 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:22:46.550 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:22:46.550 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:22:46.550 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:22:46.550 fio-3.35 00:22:46.550 Starting 4 threads 00:22:47.924 00:22:47.924 job0: (groupid=0, jobs=1): err= 0: pid=71612: Thu Apr 18 09:51:10 2024 00:22:47.924 read: IOPS=1253, BW=5015KiB/s (5135kB/s)(5020KiB/1001msec) 00:22:47.924 slat (nsec): min=15789, max=70950, avg=27284.69, stdev=9875.27 00:22:47.924 clat (usec): min=243, max=904, avg=384.48, stdev=68.86 00:22:47.924 lat (usec): min=264, max=957, avg=411.76, stdev=75.60 00:22:47.924 clat percentiles (usec): 00:22:47.924 | 1.00th=[ 306], 5.00th=[ 326], 10.00th=[ 334], 20.00th=[ 343], 00:22:47.924 | 30.00th=[ 347], 40.00th=[ 355], 50.00th=[ 359], 60.00th=[ 367], 00:22:47.924 | 70.00th=[ 379], 80.00th=[ 429], 90.00th=[ 486], 95.00th=[ 529], 00:22:47.924 | 99.00th=[ 627], 99.50th=[ 660], 99.90th=[ 725], 99.95th=[ 906], 00:22:47.924 | 99.99th=[ 906] 00:22:47.924 write: IOPS=1534, BW=6138KiB/s (6285kB/s)(6144KiB/1001msec); 0 zone resets 00:22:47.924 slat (usec): min=19, max=133, avg=36.25, stdev=10.32 00:22:47.924 clat (usec): min=123, max=890, avg=272.79, stdev=89.59 00:22:47.924 lat (usec): min=146, max=930, avg=309.04, stdev=96.27 00:22:47.924 clat percentiles (usec): 00:22:47.924 | 1.00th=[ 145], 5.00th=[ 153], 10.00th=[ 159], 20.00th=[ 182], 00:22:47.924 | 30.00th=[ 241], 40.00th=[ 253], 50.00th=[ 262], 60.00th=[ 269], 00:22:47.924 | 70.00th=[ 281], 80.00th=[ 338], 90.00th=[ 416], 95.00th=[ 437], 00:22:47.924 | 99.00th=[ 474], 99.50th=[ 594], 99.90th=[ 652], 99.95th=[ 889], 00:22:47.924 | 99.99th=[ 889] 00:22:47.924 bw ( KiB/s): min= 6656, max= 6656, per=20.33%, avg=6656.00, stdev= 0.00, samples=1 00:22:47.924 iops : min= 1664, max= 1664, avg=1664.00, stdev= 0.00, samples=1 00:22:47.924 lat (usec) : 250=20.32%, 500=75.78%, 750=3.83%, 1000=0.07% 00:22:47.924 cpu : usr=2.00%, sys=6.90%, ctx=2791, majf=0, minf=15 00:22:47.924 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:22:47.924 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:47.924 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:47.924 issued rwts: total=1255,1536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:47.924 latency : target=0, window=0, percentile=100.00%, depth=1 00:22:47.924 job1: (groupid=0, jobs=1): err= 0: pid=71613: Thu Apr 18 09:51:10 2024 00:22:47.924 read: IOPS=2405, BW=9622KiB/s (9853kB/s)(9632KiB/1001msec) 00:22:47.924 slat (nsec): min=12093, max=97849, avg=16764.54, stdev=4893.26 00:22:47.924 clat (usec): min=177, max=567, 
avg=213.57, stdev=25.65 00:22:47.924 lat (usec): min=192, max=597, avg=230.33, stdev=26.43 00:22:47.924 clat percentiles (usec): 00:22:47.924 | 1.00th=[ 182], 5.00th=[ 188], 10.00th=[ 190], 20.00th=[ 194], 00:22:47.924 | 30.00th=[ 198], 40.00th=[ 202], 50.00th=[ 206], 60.00th=[ 212], 00:22:47.924 | 70.00th=[ 221], 80.00th=[ 233], 90.00th=[ 247], 95.00th=[ 258], 00:22:47.924 | 99.00th=[ 285], 99.50th=[ 310], 99.90th=[ 400], 99.95th=[ 437], 00:22:47.924 | 99.99th=[ 570] 00:22:47.924 write: IOPS=2557, BW=9.99MiB/s (10.5MB/s)(10.0MiB/1001msec); 0 zone resets 00:22:47.924 slat (usec): min=16, max=139, avg=23.69, stdev= 7.61 00:22:47.924 clat (usec): min=119, max=376, avg=146.54, stdev=19.95 00:22:47.924 lat (usec): min=139, max=412, avg=170.23, stdev=22.65 00:22:47.924 clat percentiles (usec): 00:22:47.924 | 1.00th=[ 124], 5.00th=[ 127], 10.00th=[ 130], 20.00th=[ 133], 00:22:47.924 | 30.00th=[ 137], 40.00th=[ 139], 50.00th=[ 143], 60.00th=[ 147], 00:22:47.924 | 70.00th=[ 151], 80.00th=[ 157], 90.00th=[ 165], 95.00th=[ 174], 00:22:47.924 | 99.00th=[ 235], 99.50th=[ 253], 99.90th=[ 330], 99.95th=[ 355], 00:22:47.924 | 99.99th=[ 379] 00:22:47.924 bw ( KiB/s): min=12288, max=12288, per=37.54%, avg=12288.00, stdev= 0.00, samples=1 00:22:47.924 iops : min= 3072, max= 3072, avg=3072.00, stdev= 0.00, samples=1 00:22:47.924 lat (usec) : 250=95.61%, 500=4.37%, 750=0.02% 00:22:47.924 cpu : usr=2.20%, sys=7.60%, ctx=4969, majf=0, minf=9 00:22:47.924 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:22:47.924 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:47.924 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:47.924 issued rwts: total=2408,2560,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:47.924 latency : target=0, window=0, percentile=100.00%, depth=1 00:22:47.924 job2: (groupid=0, jobs=1): err= 0: pid=71614: Thu Apr 18 09:51:10 2024 00:22:47.924 read: IOPS=1357, BW=5431KiB/s (5561kB/s)(5436KiB/1001msec) 00:22:47.924 slat (nsec): min=13413, max=84876, avg=28423.32, stdev=9333.97 00:22:47.924 clat (usec): min=229, max=1059, avg=394.83, stdev=101.17 00:22:47.924 lat (usec): min=245, max=1108, avg=423.25, stdev=105.60 00:22:47.924 clat percentiles (usec): 00:22:47.924 | 1.00th=[ 302], 5.00th=[ 314], 10.00th=[ 322], 20.00th=[ 334], 00:22:47.924 | 30.00th=[ 343], 40.00th=[ 351], 50.00th=[ 359], 60.00th=[ 367], 00:22:47.924 | 70.00th=[ 379], 80.00th=[ 441], 90.00th=[ 578], 95.00th=[ 611], 00:22:47.924 | 99.00th=[ 701], 99.50th=[ 873], 99.90th=[ 979], 99.95th=[ 1057], 00:22:47.924 | 99.99th=[ 1057] 00:22:47.924 write: IOPS=1534, BW=6138KiB/s (6285kB/s)(6144KiB/1001msec); 0 zone resets 00:22:47.924 slat (usec): min=19, max=130, avg=34.09, stdev= 8.92 00:22:47.924 clat (usec): min=141, max=2229, avg=236.93, stdev=73.18 00:22:47.924 lat (usec): min=165, max=2289, avg=271.01, stdev=75.01 00:22:47.924 clat percentiles (usec): 00:22:47.924 | 1.00th=[ 149], 5.00th=[ 159], 10.00th=[ 167], 20.00th=[ 186], 00:22:47.924 | 30.00th=[ 215], 40.00th=[ 233], 50.00th=[ 243], 60.00th=[ 251], 00:22:47.924 | 70.00th=[ 260], 80.00th=[ 269], 90.00th=[ 281], 95.00th=[ 289], 00:22:47.924 | 99.00th=[ 412], 99.50th=[ 478], 99.90th=[ 914], 99.95th=[ 2245], 00:22:47.924 | 99.99th=[ 2245] 00:22:47.924 bw ( KiB/s): min= 8192, max= 8192, per=25.03%, avg=8192.00, stdev= 0.00, samples=1 00:22:47.924 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:22:47.924 lat (usec) : 250=31.05%, 500=61.45%, 750=6.98%, 1000=0.45% 00:22:47.924 lat (msec) : 
2=0.03%, 4=0.03% 00:22:47.924 cpu : usr=2.00%, sys=7.10%, ctx=2895, majf=0, minf=9 00:22:47.924 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:22:47.924 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:47.924 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:47.924 issued rwts: total=1359,1536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:47.924 latency : target=0, window=0, percentile=100.00%, depth=1 00:22:47.924 job3: (groupid=0, jobs=1): err= 0: pid=71615: Thu Apr 18 09:51:10 2024 00:22:47.924 read: IOPS=2209, BW=8839KiB/s (9051kB/s)(8848KiB/1001msec) 00:22:47.924 slat (nsec): min=12271, max=49309, avg=16065.27, stdev=4682.58 00:22:47.924 clat (usec): min=193, max=2064, avg=223.49, stdev=46.08 00:22:47.925 lat (usec): min=208, max=2109, avg=239.56, stdev=47.09 00:22:47.925 clat percentiles (usec): 00:22:47.925 | 1.00th=[ 200], 5.00th=[ 204], 10.00th=[ 208], 20.00th=[ 210], 00:22:47.925 | 30.00th=[ 215], 40.00th=[ 217], 50.00th=[ 221], 60.00th=[ 225], 00:22:47.925 | 70.00th=[ 229], 80.00th=[ 233], 90.00th=[ 241], 95.00th=[ 249], 00:22:47.925 | 99.00th=[ 260], 99.50th=[ 269], 99.90th=[ 717], 99.95th=[ 1004], 00:22:47.925 | 99.99th=[ 2073] 00:22:47.925 write: IOPS=2557, BW=9.99MiB/s (10.5MB/s)(10.0MiB/1001msec); 0 zone resets 00:22:47.925 slat (usec): min=17, max=141, avg=23.11, stdev= 8.31 00:22:47.925 clat (usec): min=135, max=289, avg=157.13, stdev=11.94 00:22:47.925 lat (usec): min=154, max=431, avg=180.23, stdev=16.85 00:22:47.925 clat percentiles (usec): 00:22:47.925 | 1.00th=[ 139], 5.00th=[ 143], 10.00th=[ 145], 20.00th=[ 147], 00:22:47.925 | 30.00th=[ 151], 40.00th=[ 153], 50.00th=[ 155], 60.00th=[ 159], 00:22:47.925 | 70.00th=[ 161], 80.00th=[ 165], 90.00th=[ 174], 95.00th=[ 180], 00:22:47.925 | 99.00th=[ 194], 99.50th=[ 204], 99.90th=[ 215], 99.95th=[ 229], 00:22:47.925 | 99.99th=[ 289] 00:22:47.925 bw ( KiB/s): min=10824, max=10824, per=33.07%, avg=10824.00, stdev= 0.00, samples=1 00:22:47.925 iops : min= 2706, max= 2706, avg=2706.00, stdev= 0.00, samples=1 00:22:47.925 lat (usec) : 250=98.18%, 500=1.76%, 750=0.02% 00:22:47.925 lat (msec) : 2=0.02%, 4=0.02% 00:22:47.925 cpu : usr=2.00%, sys=7.10%, ctx=4779, majf=0, minf=8 00:22:47.925 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:22:47.925 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:47.925 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:47.925 issued rwts: total=2212,2560,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:47.925 latency : target=0, window=0, percentile=100.00%, depth=1 00:22:47.925 00:22:47.925 Run status group 0 (all jobs): 00:22:47.925 READ: bw=28.2MiB/s (29.6MB/s), 5015KiB/s-9622KiB/s (5135kB/s-9853kB/s), io=28.3MiB (29.6MB), run=1001-1001msec 00:22:47.925 WRITE: bw=32.0MiB/s (33.5MB/s), 6138KiB/s-9.99MiB/s (6285kB/s-10.5MB/s), io=32.0MiB (33.6MB), run=1001-1001msec 00:22:47.925 00:22:47.925 Disk stats (read/write): 00:22:47.925 nvme0n1: ios=1074/1297, merge=0/0, ticks=436/396, in_queue=832, util=87.56% 00:22:47.925 nvme0n2: ios=2077/2140, merge=0/0, ticks=464/341, in_queue=805, util=87.67% 00:22:47.925 nvme0n3: ios=1031/1536, merge=0/0, ticks=407/394, in_queue=801, util=89.13% 00:22:47.925 nvme0n4: ios=1968/2048, merge=0/0, ticks=444/343, in_queue=787, util=89.59% 00:22:47.925 09:51:10 -- target/fio.sh@51 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t randwrite -r 1 -v 00:22:47.925 [global] 00:22:47.925 
thread=1 00:22:47.925 invalidate=1 00:22:47.925 rw=randwrite 00:22:47.925 time_based=1 00:22:47.925 runtime=1 00:22:47.925 ioengine=libaio 00:22:47.925 direct=1 00:22:47.925 bs=4096 00:22:47.925 iodepth=1 00:22:47.925 norandommap=0 00:22:47.925 numjobs=1 00:22:47.925 00:22:47.925 verify_dump=1 00:22:47.925 verify_backlog=512 00:22:47.925 verify_state_save=0 00:22:47.925 do_verify=1 00:22:47.925 verify=crc32c-intel 00:22:47.925 [job0] 00:22:47.925 filename=/dev/nvme0n1 00:22:47.925 [job1] 00:22:47.925 filename=/dev/nvme0n2 00:22:47.925 [job2] 00:22:47.925 filename=/dev/nvme0n3 00:22:47.925 [job3] 00:22:47.925 filename=/dev/nvme0n4 00:22:47.925 Could not set queue depth (nvme0n1) 00:22:47.925 Could not set queue depth (nvme0n2) 00:22:47.925 Could not set queue depth (nvme0n3) 00:22:47.925 Could not set queue depth (nvme0n4) 00:22:47.925 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:22:47.925 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:22:47.925 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:22:47.925 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:22:47.925 fio-3.35 00:22:47.925 Starting 4 threads 00:22:49.311 00:22:49.311 job0: (groupid=0, jobs=1): err= 0: pid=71672: Thu Apr 18 09:51:12 2024 00:22:49.311 read: IOPS=2105, BW=8424KiB/s (8626kB/s)(8432KiB/1001msec) 00:22:49.311 slat (nsec): min=12477, max=57022, avg=18322.83, stdev=6904.68 00:22:49.311 clat (usec): min=182, max=1708, avg=232.68, stdev=59.21 00:22:49.311 lat (usec): min=196, max=1749, avg=251.00, stdev=62.89 00:22:49.311 clat percentiles (usec): 00:22:49.311 | 1.00th=[ 186], 5.00th=[ 190], 10.00th=[ 194], 20.00th=[ 198], 00:22:49.311 | 30.00th=[ 202], 40.00th=[ 208], 50.00th=[ 215], 60.00th=[ 225], 00:22:49.311 | 70.00th=[ 239], 80.00th=[ 255], 90.00th=[ 314], 95.00th=[ 343], 00:22:49.311 | 99.00th=[ 388], 99.50th=[ 400], 99.90th=[ 453], 99.95th=[ 1012], 00:22:49.311 | 99.99th=[ 1713] 00:22:49.311 write: IOPS=2557, BW=9.99MiB/s (10.5MB/s)(10.0MiB/1001msec); 0 zone resets 00:22:49.311 slat (usec): min=16, max=121, avg=22.53, stdev= 6.89 00:22:49.311 clat (usec): min=119, max=1637, avg=157.95, stdev=45.09 00:22:49.311 lat (usec): min=137, max=1656, avg=180.48, stdev=48.01 00:22:49.311 clat percentiles (usec): 00:22:49.311 | 1.00th=[ 125], 5.00th=[ 128], 10.00th=[ 131], 20.00th=[ 135], 00:22:49.311 | 30.00th=[ 137], 40.00th=[ 141], 50.00th=[ 145], 60.00th=[ 153], 00:22:49.311 | 70.00th=[ 163], 80.00th=[ 176], 90.00th=[ 200], 95.00th=[ 233], 00:22:49.311 | 99.00th=[ 277], 99.50th=[ 310], 99.90th=[ 408], 99.95th=[ 486], 00:22:49.311 | 99.99th=[ 1631] 00:22:49.311 bw ( KiB/s): min=12288, max=12288, per=36.78%, avg=12288.00, stdev= 0.00, samples=1 00:22:49.311 iops : min= 3072, max= 3072, avg=3072.00, stdev= 0.00, samples=1 00:22:49.311 lat (usec) : 250=88.00%, 500=11.93% 00:22:49.311 lat (msec) : 2=0.06% 00:22:49.311 cpu : usr=1.50%, sys=7.90%, ctx=4669, majf=0, minf=18 00:22:49.311 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:22:49.311 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:49.311 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:49.311 issued rwts: total=2108,2560,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:49.311 latency : target=0, window=0, percentile=100.00%, depth=1 
00:22:49.311 job1: (groupid=0, jobs=1): err= 0: pid=71674: Thu Apr 18 09:51:12 2024 00:22:49.311 read: IOPS=1534, BW=6138KiB/s (6285kB/s)(6144KiB/1001msec) 00:22:49.311 slat (nsec): min=9405, max=61944, avg=14056.33, stdev=3776.58 00:22:49.311 clat (usec): min=234, max=1930, avg=348.15, stdev=60.16 00:22:49.311 lat (usec): min=250, max=1944, avg=362.21, stdev=61.08 00:22:49.311 clat percentiles (usec): 00:22:49.311 | 1.00th=[ 297], 5.00th=[ 306], 10.00th=[ 314], 20.00th=[ 318], 00:22:49.311 | 30.00th=[ 326], 40.00th=[ 330], 50.00th=[ 334], 60.00th=[ 338], 00:22:49.311 | 70.00th=[ 347], 80.00th=[ 359], 90.00th=[ 424], 95.00th=[ 449], 00:22:49.311 | 99.00th=[ 510], 99.50th=[ 537], 99.90th=[ 627], 99.95th=[ 1926], 00:22:49.311 | 99.99th=[ 1926] 00:22:49.311 write: IOPS=1553, BW=6214KiB/s (6363kB/s)(6220KiB/1001msec); 0 zone resets 00:22:49.311 slat (usec): min=10, max=1612, avg=22.63, stdev=41.15 00:22:49.311 clat (usec): min=6, max=6024, avg=259.27, stdev=168.09 00:22:49.311 lat (usec): min=175, max=6087, avg=281.90, stdev=173.22 00:22:49.311 clat percentiles (usec): 00:22:49.311 | 1.00th=[ 161], 5.00th=[ 172], 10.00th=[ 188], 20.00th=[ 225], 00:22:49.311 | 30.00th=[ 239], 40.00th=[ 247], 50.00th=[ 258], 60.00th=[ 265], 00:22:49.311 | 70.00th=[ 273], 80.00th=[ 281], 90.00th=[ 293], 95.00th=[ 306], 00:22:49.311 | 99.00th=[ 388], 99.50th=[ 881], 99.90th=[ 1631], 99.95th=[ 5997], 00:22:49.311 | 99.99th=[ 5997] 00:22:49.311 bw ( KiB/s): min= 8192, max= 8192, per=24.52%, avg=8192.00, stdev= 0.00, samples=1 00:22:49.311 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:22:49.311 lat (usec) : 10=0.03%, 250=21.38%, 500=77.68%, 750=0.58%, 1000=0.06% 00:22:49.311 lat (msec) : 2=0.23%, 10=0.03% 00:22:49.311 cpu : usr=1.00%, sys=4.80%, ctx=3097, majf=0, minf=17 00:22:49.311 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:22:49.311 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:49.311 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:49.311 issued rwts: total=1536,1555,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:49.311 latency : target=0, window=0, percentile=100.00%, depth=1 00:22:49.311 job2: (groupid=0, jobs=1): err= 0: pid=71675: Thu Apr 18 09:51:12 2024 00:22:49.311 read: IOPS=2253, BW=9015KiB/s (9231kB/s)(9024KiB/1001msec) 00:22:49.311 slat (nsec): min=12460, max=41045, avg=14796.58, stdev=3011.10 00:22:49.311 clat (usec): min=192, max=1914, avg=219.40, stdev=46.81 00:22:49.311 lat (usec): min=206, max=1927, avg=234.20, stdev=47.13 00:22:49.311 clat percentiles (usec): 00:22:49.311 | 1.00th=[ 198], 5.00th=[ 200], 10.00th=[ 204], 20.00th=[ 206], 00:22:49.311 | 30.00th=[ 210], 40.00th=[ 212], 50.00th=[ 215], 60.00th=[ 219], 00:22:49.311 | 70.00th=[ 223], 80.00th=[ 227], 90.00th=[ 235], 95.00th=[ 243], 00:22:49.311 | 99.00th=[ 265], 99.50th=[ 375], 99.90th=[ 816], 99.95th=[ 930], 00:22:49.311 | 99.99th=[ 1909] 00:22:49.311 write: IOPS=2557, BW=9.99MiB/s (10.5MB/s)(10.0MiB/1001msec); 0 zone resets 00:22:49.311 slat (nsec): min=15650, max=98516, avg=21675.54, stdev=5295.69 00:22:49.311 clat (usec): min=130, max=606, avg=159.37, stdev=34.30 00:22:49.311 lat (usec): min=151, max=705, avg=181.04, stdev=38.01 00:22:49.311 clat percentiles (usec): 00:22:49.311 | 1.00th=[ 137], 5.00th=[ 139], 10.00th=[ 141], 20.00th=[ 143], 00:22:49.311 | 30.00th=[ 147], 40.00th=[ 149], 50.00th=[ 153], 60.00th=[ 157], 00:22:49.311 | 70.00th=[ 161], 80.00th=[ 165], 90.00th=[ 176], 95.00th=[ 190], 00:22:49.311 | 99.00th=[ 
297], 99.50th=[ 445], 99.90th=[ 515], 99.95th=[ 537], 00:22:49.311 | 99.99th=[ 611] 00:22:49.311 bw ( KiB/s): min=10968, max=10968, per=32.83%, avg=10968.00, stdev= 0.00, samples=1 00:22:49.311 iops : min= 2742, max= 2742, avg=2742.00, stdev= 0.00, samples=1 00:22:49.311 lat (usec) : 250=97.59%, 500=2.20%, 750=0.12%, 1000=0.06% 00:22:49.311 lat (msec) : 2=0.02% 00:22:49.311 cpu : usr=1.70%, sys=7.00%, ctx=4816, majf=0, minf=7 00:22:49.311 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:22:49.311 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:49.311 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:49.311 issued rwts: total=2256,2560,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:49.311 latency : target=0, window=0, percentile=100.00%, depth=1 00:22:49.311 job3: (groupid=0, jobs=1): err= 0: pid=71676: Thu Apr 18 09:51:12 2024 00:22:49.311 read: IOPS=1534, BW=6138KiB/s (6285kB/s)(6144KiB/1001msec) 00:22:49.311 slat (nsec): min=12068, max=75978, avg=16941.37, stdev=3746.92 00:22:49.311 clat (usec): min=234, max=2029, avg=342.34, stdev=58.72 00:22:49.311 lat (usec): min=262, max=2062, avg=359.28, stdev=58.98 00:22:49.311 clat percentiles (usec): 00:22:49.311 | 1.00th=[ 289], 5.00th=[ 302], 10.00th=[ 306], 20.00th=[ 314], 00:22:49.311 | 30.00th=[ 322], 40.00th=[ 326], 50.00th=[ 330], 60.00th=[ 334], 00:22:49.311 | 70.00th=[ 343], 80.00th=[ 355], 90.00th=[ 408], 95.00th=[ 433], 00:22:49.311 | 99.00th=[ 486], 99.50th=[ 529], 99.90th=[ 553], 99.95th=[ 2024], 00:22:49.311 | 99.99th=[ 2024] 00:22:49.311 write: IOPS=1683, BW=6733KiB/s (6895kB/s)(6740KiB/1001msec); 0 zone resets 00:22:49.311 slat (usec): min=11, max=126, avg=24.32, stdev= 8.41 00:22:49.311 clat (usec): min=51, max=1099, avg=237.83, stdev=48.28 00:22:49.311 lat (usec): min=156, max=1118, avg=262.15, stdev=49.37 00:22:49.311 clat percentiles (usec): 00:22:49.311 | 1.00th=[ 141], 5.00th=[ 151], 10.00th=[ 167], 20.00th=[ 198], 00:22:49.311 | 30.00th=[ 225], 40.00th=[ 239], 50.00th=[ 249], 60.00th=[ 258], 00:22:49.311 | 70.00th=[ 265], 80.00th=[ 273], 90.00th=[ 281], 95.00th=[ 293], 00:22:49.311 | 99.00th=[ 322], 99.50th=[ 334], 99.90th=[ 400], 99.95th=[ 1106], 00:22:49.311 | 99.99th=[ 1106] 00:22:49.311 bw ( KiB/s): min= 8192, max= 8192, per=24.52%, avg=8192.00, stdev= 0.00, samples=1 00:22:49.311 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:22:49.311 lat (usec) : 100=0.03%, 250=27.35%, 500=72.15%, 750=0.40% 00:22:49.311 lat (msec) : 2=0.03%, 4=0.03% 00:22:49.311 cpu : usr=1.60%, sys=5.50%, ctx=3228, majf=0, minf=5 00:22:49.311 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:22:49.311 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:49.311 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:49.311 issued rwts: total=1536,1685,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:49.311 latency : target=0, window=0, percentile=100.00%, depth=1 00:22:49.311 00:22:49.311 Run status group 0 (all jobs): 00:22:49.311 READ: bw=29.0MiB/s (30.4MB/s), 6138KiB/s-9015KiB/s (6285kB/s-9231kB/s), io=29.0MiB (30.5MB), run=1001-1001msec 00:22:49.311 WRITE: bw=32.6MiB/s (34.2MB/s), 6214KiB/s-9.99MiB/s (6363kB/s-10.5MB/s), io=32.7MiB (34.2MB), run=1001-1001msec 00:22:49.311 00:22:49.311 Disk stats (read/write): 00:22:49.311 nvme0n1: ios=1998/2048, merge=0/0, ticks=485/346, in_queue=831, util=88.57% 00:22:49.311 nvme0n2: ios=1226/1536, merge=0/0, ticks=451/370, in_queue=821, 
util=89.19% 00:22:49.311 nvme0n3: ios=2069/2065, merge=0/0, ticks=511/342, in_queue=853, util=89.79% 00:22:49.311 nvme0n4: ios=1255/1536, merge=0/0, ticks=434/356, in_queue=790, util=89.80% 00:22:49.311 09:51:12 -- target/fio.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t write -r 1 -v 00:22:49.311 [global] 00:22:49.311 thread=1 00:22:49.311 invalidate=1 00:22:49.311 rw=write 00:22:49.312 time_based=1 00:22:49.312 runtime=1 00:22:49.312 ioengine=libaio 00:22:49.312 direct=1 00:22:49.312 bs=4096 00:22:49.312 iodepth=128 00:22:49.312 norandommap=0 00:22:49.312 numjobs=1 00:22:49.312 00:22:49.312 verify_dump=1 00:22:49.312 verify_backlog=512 00:22:49.312 verify_state_save=0 00:22:49.312 do_verify=1 00:22:49.312 verify=crc32c-intel 00:22:49.312 [job0] 00:22:49.312 filename=/dev/nvme0n1 00:22:49.312 [job1] 00:22:49.312 filename=/dev/nvme0n2 00:22:49.312 [job2] 00:22:49.312 filename=/dev/nvme0n3 00:22:49.312 [job3] 00:22:49.312 filename=/dev/nvme0n4 00:22:49.312 Could not set queue depth (nvme0n1) 00:22:49.312 Could not set queue depth (nvme0n2) 00:22:49.312 Could not set queue depth (nvme0n3) 00:22:49.312 Could not set queue depth (nvme0n4) 00:22:49.312 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:22:49.312 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:22:49.312 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:22:49.312 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:22:49.312 fio-3.35 00:22:49.312 Starting 4 threads 00:22:50.310 00:22:50.310 job0: (groupid=0, jobs=1): err= 0: pid=71730: Thu Apr 18 09:51:13 2024 00:22:50.310 read: IOPS=3580, BW=14.0MiB/s (14.7MB/s)(14.0MiB/1001msec) 00:22:50.310 slat (usec): min=5, max=7051, avg=134.41, stdev=699.31 00:22:50.310 clat (usec): min=9750, max=32823, avg=18055.30, stdev=5720.39 00:22:50.310 lat (usec): min=12370, max=32850, avg=18189.72, stdev=5717.18 00:22:50.311 clat percentiles (usec): 00:22:50.311 | 1.00th=[11076], 5.00th=[12780], 10.00th=[13042], 20.00th=[13435], 00:22:50.311 | 30.00th=[13829], 40.00th=[13960], 50.00th=[14091], 60.00th=[19530], 00:22:50.311 | 70.00th=[21103], 80.00th=[23987], 90.00th=[27919], 95.00th=[28967], 00:22:50.311 | 99.00th=[31065], 99.50th=[32637], 99.90th=[32900], 99.95th=[32900], 00:22:50.311 | 99.99th=[32900] 00:22:50.311 write: IOPS=3741, BW=14.6MiB/s (15.3MB/s)(14.6MiB/1001msec); 0 zone resets 00:22:50.311 slat (usec): min=10, max=8080, avg=129.92, stdev=628.48 00:22:50.311 clat (usec): min=497, max=29789, avg=16472.68, stdev=4966.11 00:22:50.311 lat (usec): min=4679, max=31098, avg=16602.60, stdev=4966.62 00:22:50.311 clat percentiles (usec): 00:22:50.311 | 1.00th=[ 9765], 5.00th=[12387], 10.00th=[12649], 20.00th=[12780], 00:22:50.311 | 30.00th=[13042], 40.00th=[13173], 50.00th=[13435], 60.00th=[13698], 00:22:50.311 | 70.00th=[21627], 80.00th=[22938], 90.00th=[23462], 95.00th=[23987], 00:22:50.311 | 99.00th=[26870], 99.50th=[29754], 99.90th=[29754], 99.95th=[29754], 00:22:50.311 | 99.99th=[29754] 00:22:50.311 bw ( KiB/s): min= 9224, max=19759, per=31.43%, avg=14491.50, stdev=7449.37, samples=2 00:22:50.311 iops : min= 2306, max= 4939, avg=3622.50, stdev=1861.81, samples=2 00:22:50.311 lat (usec) : 500=0.01% 00:22:50.311 lat (msec) : 10=0.61%, 20=64.91%, 50=34.47% 00:22:50.311 cpu : usr=3.40%, sys=11.30%, ctx=232, majf=0, minf=1 
00:22:50.311 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.1% 00:22:50.311 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:50.311 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:22:50.311 issued rwts: total=3584,3745,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:50.311 latency : target=0, window=0, percentile=100.00%, depth=128 00:22:50.311 job1: (groupid=0, jobs=1): err= 0: pid=71731: Thu Apr 18 09:51:13 2024 00:22:50.311 read: IOPS=1949, BW=7797KiB/s (7984kB/s)(7820KiB/1003msec) 00:22:50.311 slat (usec): min=5, max=14597, avg=210.63, stdev=1003.65 00:22:50.311 clat (usec): min=613, max=73520, avg=25389.43, stdev=11889.80 00:22:50.311 lat (usec): min=2865, max=73543, avg=25600.06, stdev=12016.84 00:22:50.311 clat percentiles (usec): 00:22:50.311 | 1.00th=[ 3228], 5.00th=[14484], 10.00th=[16319], 20.00th=[17171], 00:22:50.311 | 30.00th=[17695], 40.00th=[17957], 50.00th=[23987], 60.00th=[26084], 00:22:50.311 | 70.00th=[28705], 80.00th=[29754], 90.00th=[42730], 95.00th=[52167], 00:22:50.311 | 99.00th=[66323], 99.50th=[69731], 99.90th=[71828], 99.95th=[73925], 00:22:50.311 | 99.99th=[73925] 00:22:50.311 write: IOPS=2041, BW=8167KiB/s (8364kB/s)(8192KiB/1003msec); 0 zone resets 00:22:50.311 slat (usec): min=14, max=7770, avg=278.18, stdev=1075.81 00:22:50.311 clat (usec): min=14762, max=80946, avg=37363.58, stdev=20166.17 00:22:50.311 lat (usec): min=14787, max=80993, avg=37641.76, stdev=20292.34 00:22:50.311 clat percentiles (usec): 00:22:50.311 | 1.00th=[15795], 5.00th=[16057], 10.00th=[16450], 20.00th=[17433], 00:22:50.311 | 30.00th=[20841], 40.00th=[24249], 50.00th=[25822], 60.00th=[42730], 00:22:50.311 | 70.00th=[54264], 80.00th=[60556], 90.00th=[66847], 95.00th=[70779], 00:22:50.311 | 99.00th=[78119], 99.50th=[79168], 99.90th=[81265], 99.95th=[81265], 00:22:50.311 | 99.99th=[81265] 00:22:50.311 bw ( KiB/s): min= 8192, max= 8192, per=17.77%, avg=8192.00, stdev= 0.00, samples=2 00:22:50.311 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=2 00:22:50.311 lat (usec) : 750=0.02% 00:22:50.311 lat (msec) : 4=0.50%, 10=1.05%, 20=34.27%, 50=43.17%, 100=20.98% 00:22:50.311 cpu : usr=2.40%, sys=6.79%, ctx=270, majf=0, minf=5 00:22:50.311 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.8%, >=64=98.4% 00:22:50.311 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:50.311 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:22:50.311 issued rwts: total=1955,2048,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:50.311 latency : target=0, window=0, percentile=100.00%, depth=128 00:22:50.311 job2: (groupid=0, jobs=1): err= 0: pid=71732: Thu Apr 18 09:51:13 2024 00:22:50.311 read: IOPS=3062, BW=12.0MiB/s (12.5MB/s)(12.0MiB/1003msec) 00:22:50.311 slat (usec): min=3, max=9917, avg=150.21, stdev=807.02 00:22:50.311 clat (usec): min=11570, max=40871, avg=20233.35, stdev=6991.05 00:22:50.311 lat (usec): min=14442, max=40894, avg=20383.57, stdev=6993.62 00:22:50.311 clat percentiles (usec): 00:22:50.311 | 1.00th=[12649], 5.00th=[14746], 10.00th=[15008], 20.00th=[15270], 00:22:50.311 | 30.00th=[15664], 40.00th=[17957], 50.00th=[18744], 60.00th=[19006], 00:22:50.311 | 70.00th=[19530], 80.00th=[20841], 90.00th=[36439], 95.00th=[38011], 00:22:50.311 | 99.00th=[39584], 99.50th=[40633], 99.90th=[40633], 99.95th=[40633], 00:22:50.311 | 99.99th=[40633] 00:22:50.311 write: IOPS=3474, BW=13.6MiB/s (14.2MB/s)(13.6MiB/1003msec); 0 zone resets 00:22:50.311 slat (usec): min=8, 
max=8547, avg=146.83, stdev=707.78 00:22:50.311 clat (usec): min=618, max=41855, avg=18446.77, stdev=5842.47 00:22:50.311 lat (usec): min=8893, max=41883, avg=18593.60, stdev=5854.52 00:22:50.311 clat percentiles (usec): 00:22:50.311 | 1.00th=[11338], 5.00th=[14091], 10.00th=[14484], 20.00th=[14746], 00:22:50.311 | 30.00th=[15139], 40.00th=[15533], 50.00th=[16909], 60.00th=[17695], 00:22:50.311 | 70.00th=[18220], 80.00th=[19792], 90.00th=[22676], 95.00th=[34866], 00:22:50.311 | 99.00th=[36963], 99.50th=[38011], 99.90th=[39584], 99.95th=[40633], 00:22:50.311 | 99.99th=[41681] 00:22:50.311 bw ( KiB/s): min=10472, max=16416, per=29.16%, avg=13444.00, stdev=4203.04, samples=2 00:22:50.311 iops : min= 2618, max= 4104, avg=3361.00, stdev=1050.76, samples=2 00:22:50.311 lat (usec) : 750=0.02% 00:22:50.311 lat (msec) : 10=0.49%, 20=77.76%, 50=21.73% 00:22:50.311 cpu : usr=3.09%, sys=10.48%, ctx=219, majf=0, minf=4 00:22:50.311 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.5%, >=64=99.0% 00:22:50.311 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:50.311 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:22:50.311 issued rwts: total=3072,3485,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:50.311 latency : target=0, window=0, percentile=100.00%, depth=128 00:22:50.311 job3: (groupid=0, jobs=1): err= 0: pid=71733: Thu Apr 18 09:51:13 2024 00:22:50.311 read: IOPS=2039, BW=8159KiB/s (8355kB/s)(8192KiB/1004msec) 00:22:50.311 slat (usec): min=6, max=13049, avg=241.83, stdev=1278.80 00:22:50.311 clat (usec): min=15443, max=52221, avg=31184.77, stdev=7004.40 00:22:50.311 lat (usec): min=18761, max=52248, avg=31426.59, stdev=6962.90 00:22:50.311 clat percentiles (usec): 00:22:50.311 | 1.00th=[18744], 5.00th=[21103], 10.00th=[21890], 20.00th=[23200], 00:22:50.311 | 30.00th=[26608], 40.00th=[28705], 50.00th=[32113], 60.00th=[34341], 00:22:50.311 | 70.00th=[35914], 80.00th=[37487], 90.00th=[39060], 95.00th=[41157], 00:22:50.311 | 99.00th=[48497], 99.50th=[52167], 99.90th=[52167], 99.95th=[52167], 00:22:50.311 | 99.99th=[52167] 00:22:50.311 write: IOPS=2285, BW=9143KiB/s (9363kB/s)(9180KiB/1004msec); 0 zone resets 00:22:50.311 slat (usec): min=11, max=10432, avg=212.22, stdev=1076.99 00:22:50.311 clat (usec): min=1959, max=42181, avg=27071.60, stdev=6321.13 00:22:50.311 lat (usec): min=9491, max=42276, avg=27283.82, stdev=6263.59 00:22:50.311 clat percentiles (usec): 00:22:50.311 | 1.00th=[10028], 5.00th=[17695], 10.00th=[18220], 20.00th=[19006], 00:22:50.311 | 30.00th=[24511], 40.00th=[26608], 50.00th=[27657], 60.00th=[30016], 00:22:50.311 | 70.00th=[31851], 80.00th=[33162], 90.00th=[34341], 95.00th=[35914], 00:22:50.311 | 99.00th=[37487], 99.50th=[37487], 99.90th=[37487], 99.95th=[37487], 00:22:50.311 | 99.99th=[42206] 00:22:50.311 bw ( KiB/s): min= 7344, max=10012, per=18.82%, avg=8678.00, stdev=1886.56, samples=2 00:22:50.311 iops : min= 1836, max= 2503, avg=2169.50, stdev=471.64, samples=2 00:22:50.311 lat (msec) : 2=0.02%, 10=0.48%, 20=13.24%, 50=85.93%, 100=0.32% 00:22:50.311 cpu : usr=1.69%, sys=7.48%, ctx=154, majf=0, minf=11 00:22:50.311 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.7%, >=64=98.5% 00:22:50.311 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:50.311 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:22:50.311 issued rwts: total=2048,2295,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:50.311 latency : target=0, window=0, percentile=100.00%, depth=128 00:22:50.311 
00:22:50.311 Run status group 0 (all jobs): 00:22:50.311 READ: bw=41.5MiB/s (43.5MB/s), 7797KiB/s-14.0MiB/s (7984kB/s-14.7MB/s), io=41.6MiB (43.7MB), run=1001-1004msec 00:22:50.311 WRITE: bw=45.0MiB/s (47.2MB/s), 8167KiB/s-14.6MiB/s (8364kB/s-15.3MB/s), io=45.2MiB (47.4MB), run=1001-1004msec 00:22:50.311 00:22:50.311 Disk stats (read/write): 00:22:50.311 nvme0n1: ios=3122/3392, merge=0/0, ticks=12190/12285, in_queue=24475, util=88.68% 00:22:50.311 nvme0n2: ios=1473/1536, merge=0/0, ticks=13882/21277, in_queue=35159, util=88.70% 00:22:50.311 nvme0n3: ios=2976/3072, merge=0/0, ticks=13203/11416, in_queue=24619, util=89.28% 00:22:50.311 nvme0n4: ios=1824/2048, merge=0/0, ticks=13180/12216, in_queue=25396, util=89.33% 00:22:50.311 09:51:13 -- target/fio.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randwrite -r 1 -v 00:22:50.569 [global] 00:22:50.569 thread=1 00:22:50.569 invalidate=1 00:22:50.569 rw=randwrite 00:22:50.569 time_based=1 00:22:50.569 runtime=1 00:22:50.569 ioengine=libaio 00:22:50.569 direct=1 00:22:50.569 bs=4096 00:22:50.569 iodepth=128 00:22:50.569 norandommap=0 00:22:50.569 numjobs=1 00:22:50.569 00:22:50.569 verify_dump=1 00:22:50.569 verify_backlog=512 00:22:50.569 verify_state_save=0 00:22:50.569 do_verify=1 00:22:50.569 verify=crc32c-intel 00:22:50.569 [job0] 00:22:50.569 filename=/dev/nvme0n1 00:22:50.569 [job1] 00:22:50.569 filename=/dev/nvme0n2 00:22:50.569 [job2] 00:22:50.569 filename=/dev/nvme0n3 00:22:50.569 [job3] 00:22:50.569 filename=/dev/nvme0n4 00:22:50.569 Could not set queue depth (nvme0n1) 00:22:50.569 Could not set queue depth (nvme0n2) 00:22:50.569 Could not set queue depth (nvme0n3) 00:22:50.569 Could not set queue depth (nvme0n4) 00:22:50.569 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:22:50.569 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:22:50.569 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:22:50.569 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:22:50.569 fio-3.35 00:22:50.569 Starting 4 threads 00:22:51.944 00:22:51.944 job0: (groupid=0, jobs=1): err= 0: pid=71792: Thu Apr 18 09:51:15 2024 00:22:51.944 read: IOPS=4598, BW=18.0MiB/s (18.8MB/s)(18.0MiB/1002msec) 00:22:51.944 slat (usec): min=4, max=6236, avg=108.69, stdev=509.95 00:22:51.944 clat (usec): min=3126, max=20088, avg=14005.71, stdev=1495.37 00:22:51.944 lat (usec): min=3140, max=24742, avg=14114.39, stdev=1516.48 00:22:51.944 clat percentiles (usec): 00:22:51.944 | 1.00th=[ 7832], 5.00th=[12125], 10.00th=[12649], 20.00th=[13566], 00:22:51.944 | 30.00th=[13829], 40.00th=[13960], 50.00th=[13960], 60.00th=[14222], 00:22:51.944 | 70.00th=[14353], 80.00th=[14484], 90.00th=[15008], 95.00th=[16712], 00:22:51.944 | 99.00th=[18482], 99.50th=[19006], 99.90th=[19792], 99.95th=[19792], 00:22:51.944 | 99.99th=[20055] 00:22:51.944 write: IOPS=4615, BW=18.0MiB/s (18.9MB/s)(18.1MiB/1002msec); 0 zone resets 00:22:51.944 slat (usec): min=12, max=5782, avg=99.62, stdev=561.47 00:22:51.944 clat (usec): min=797, max=20514, avg=13424.92, stdev=1524.60 00:22:51.944 lat (usec): min=2679, max=20531, avg=13524.55, stdev=1609.27 00:22:51.944 clat percentiles (usec): 00:22:51.944 | 1.00th=[ 9110], 5.00th=[11469], 10.00th=[12387], 20.00th=[12780], 00:22:51.944 | 30.00th=[12911], 40.00th=[13173], 50.00th=[13304], 
60.00th=[13435], 00:22:51.944 | 70.00th=[13698], 80.00th=[14091], 90.00th=[15008], 95.00th=[15926], 00:22:51.944 | 99.00th=[18220], 99.50th=[18744], 99.90th=[20579], 99.95th=[20579], 00:22:51.944 | 99.99th=[20579] 00:22:51.944 bw ( KiB/s): min=20072, max=20072, per=38.29%, avg=20072.00, stdev= 0.00, samples=1 00:22:51.944 iops : min= 5018, max= 5018, avg=5018.00, stdev= 0.00, samples=1 00:22:51.944 lat (usec) : 1000=0.01% 00:22:51.944 lat (msec) : 4=0.23%, 10=1.45%, 20=98.22%, 50=0.09% 00:22:51.944 cpu : usr=4.20%, sys=13.09%, ctx=328, majf=0, minf=3 00:22:51.944 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.3% 00:22:51.944 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:51.944 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:22:51.944 issued rwts: total=4608,4625,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:51.944 latency : target=0, window=0, percentile=100.00%, depth=128 00:22:51.944 job1: (groupid=0, jobs=1): err= 0: pid=71793: Thu Apr 18 09:51:15 2024 00:22:51.944 read: IOPS=2027, BW=8111KiB/s (8306kB/s)(8192KiB/1010msec) 00:22:51.944 slat (usec): min=7, max=12944, avg=236.83, stdev=1059.92 00:22:51.944 clat (usec): min=16908, max=42367, avg=29583.84, stdev=4467.42 00:22:51.944 lat (usec): min=16923, max=48640, avg=29820.67, stdev=4526.08 00:22:51.944 clat percentiles (usec): 00:22:51.944 | 1.00th=[18482], 5.00th=[21890], 10.00th=[24511], 20.00th=[26870], 00:22:51.944 | 30.00th=[28443], 40.00th=[29230], 50.00th=[29754], 60.00th=[30016], 00:22:51.944 | 70.00th=[30540], 80.00th=[31065], 90.00th=[35390], 95.00th=[40109], 00:22:51.944 | 99.00th=[41681], 99.50th=[41681], 99.90th=[41681], 99.95th=[42206], 00:22:51.944 | 99.99th=[42206] 00:22:51.944 write: IOPS=2227, BW=8911KiB/s (9125kB/s)(9000KiB/1010msec); 0 zone resets 00:22:51.944 slat (usec): min=4, max=11529, avg=221.29, stdev=884.66 00:22:51.944 clat (usec): min=8683, max=43461, avg=29025.83, stdev=5336.62 00:22:51.944 lat (usec): min=10177, max=43480, avg=29247.12, stdev=5345.49 00:22:51.944 clat percentiles (usec): 00:22:51.945 | 1.00th=[13698], 5.00th=[19530], 10.00th=[21890], 20.00th=[25822], 00:22:51.945 | 30.00th=[27919], 40.00th=[28181], 50.00th=[29230], 60.00th=[30802], 00:22:51.945 | 70.00th=[31589], 80.00th=[32113], 90.00th=[34866], 95.00th=[37487], 00:22:51.945 | 99.00th=[42206], 99.50th=[42730], 99.90th=[43254], 99.95th=[43254], 00:22:51.945 | 99.99th=[43254] 00:22:51.945 bw ( KiB/s): min= 8192, max= 8784, per=16.19%, avg=8488.00, stdev=418.61, samples=2 00:22:51.945 iops : min= 2048, max= 2196, avg=2122.00, stdev=104.65, samples=2 00:22:51.945 lat (msec) : 10=0.02%, 20=4.21%, 50=95.77% 00:22:51.945 cpu : usr=2.18%, sys=6.05%, ctx=671, majf=0, minf=6 00:22:51.945 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.7%, >=64=98.5% 00:22:51.945 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:51.945 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:22:51.945 issued rwts: total=2048,2250,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:51.945 latency : target=0, window=0, percentile=100.00%, depth=128 00:22:51.945 job2: (groupid=0, jobs=1): err= 0: pid=71794: Thu Apr 18 09:51:15 2024 00:22:51.945 read: IOPS=2019, BW=8079KiB/s (8273kB/s)(8192KiB/1014msec) 00:22:51.945 slat (usec): min=4, max=14735, avg=241.95, stdev=1087.63 00:22:51.945 clat (usec): min=16518, max=45429, avg=30292.93, stdev=4891.26 00:22:51.945 lat (usec): min=17315, max=45950, avg=30534.88, stdev=4933.29 00:22:51.945 clat 
percentiles (usec): 00:22:51.945 | 1.00th=[18482], 5.00th=[22414], 10.00th=[24511], 20.00th=[27132], 00:22:51.945 | 30.00th=[29492], 40.00th=[29754], 50.00th=[30016], 60.00th=[30278], 00:22:51.945 | 70.00th=[30540], 80.00th=[33424], 90.00th=[36963], 95.00th=[40109], 00:22:51.945 | 99.00th=[43779], 99.50th=[44303], 99.90th=[45351], 99.95th=[45351], 00:22:51.945 | 99.99th=[45351] 00:22:51.945 write: IOPS=2286, BW=9148KiB/s (9367kB/s)(9276KiB/1014msec); 0 zone resets 00:22:51.945 slat (usec): min=5, max=11579, avg=210.15, stdev=853.53 00:22:51.945 clat (usec): min=11719, max=45156, avg=28622.43, stdev=5420.96 00:22:51.945 lat (usec): min=11734, max=47119, avg=28832.58, stdev=5461.59 00:22:51.945 clat percentiles (usec): 00:22:51.945 | 1.00th=[15795], 5.00th=[18744], 10.00th=[20841], 20.00th=[23462], 00:22:51.945 | 30.00th=[26870], 40.00th=[28181], 50.00th=[29230], 60.00th=[31065], 00:22:51.945 | 70.00th=[32113], 80.00th=[33162], 90.00th=[34866], 95.00th=[35914], 00:22:51.945 | 99.00th=[40109], 99.50th=[41157], 99.90th=[41681], 99.95th=[44303], 00:22:51.945 | 99.99th=[45351] 00:22:51.945 bw ( KiB/s): min= 8598, max= 8920, per=16.71%, avg=8759.00, stdev=227.69, samples=2 00:22:51.945 iops : min= 2149, max= 2230, avg=2189.50, stdev=57.28, samples=2 00:22:51.945 lat (msec) : 20=5.36%, 50=94.64% 00:22:51.945 cpu : usr=2.37%, sys=6.02%, ctx=628, majf=0, minf=3 00:22:51.945 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.7%, >=64=98.6% 00:22:51.945 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:51.945 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:22:51.945 issued rwts: total=2048,2319,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:51.945 latency : target=0, window=0, percentile=100.00%, depth=128 00:22:51.945 job3: (groupid=0, jobs=1): err= 0: pid=71795: Thu Apr 18 09:51:15 2024 00:22:51.945 read: IOPS=3973, BW=15.5MiB/s (16.3MB/s)(15.6MiB/1007msec) 00:22:51.945 slat (usec): min=5, max=4051, avg=121.52, stdev=592.83 00:22:51.945 clat (usec): min=3681, max=19699, avg=15958.27, stdev=1185.26 00:22:51.945 lat (usec): min=6728, max=19712, avg=16079.79, stdev=1028.43 00:22:51.945 clat percentiles (usec): 00:22:51.945 | 1.00th=[10945], 5.00th=[15270], 10.00th=[15664], 20.00th=[15795], 00:22:51.945 | 30.00th=[15926], 40.00th=[16057], 50.00th=[16057], 60.00th=[16188], 00:22:51.945 | 70.00th=[16319], 80.00th=[16450], 90.00th=[16581], 95.00th=[16909], 00:22:51.945 | 99.00th=[17171], 99.50th=[19268], 99.90th=[19530], 99.95th=[19530], 00:22:51.945 | 99.99th=[19792] 00:22:51.945 write: IOPS=4067, BW=15.9MiB/s (16.7MB/s)(16.0MiB/1007msec); 0 zone resets 00:22:51.945 slat (usec): min=8, max=3946, avg=118.22, stdev=533.08 00:22:51.945 clat (usec): min=11659, max=16441, avg=15432.62, stdev=629.63 00:22:51.945 lat (usec): min=12765, max=16637, avg=15550.84, stdev=336.01 00:22:51.945 clat percentiles (usec): 00:22:51.945 | 1.00th=[12256], 5.00th=[14877], 10.00th=[15008], 20.00th=[15139], 00:22:51.945 | 30.00th=[15270], 40.00th=[15401], 50.00th=[15533], 60.00th=[15664], 00:22:51.945 | 70.00th=[15664], 80.00th=[15795], 90.00th=[15926], 95.00th=[16057], 00:22:51.945 | 99.00th=[16188], 99.50th=[16319], 99.90th=[16450], 99.95th=[16450], 00:22:51.945 | 99.99th=[16450] 00:22:51.945 bw ( KiB/s): min=16384, max=16384, per=31.25%, avg=16384.00, stdev= 0.00, samples=2 00:22:51.945 iops : min= 4096, max= 4096, avg=4096.00, stdev= 0.00, samples=2 00:22:51.945 lat (msec) : 4=0.01%, 10=0.40%, 20=99.59% 00:22:51.945 cpu : usr=4.08%, sys=10.93%, ctx=254, majf=0, minf=1 
00:22:51.945 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:22:51.945 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:51.945 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:22:51.945 issued rwts: total=4001,4096,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:51.945 latency : target=0, window=0, percentile=100.00%, depth=128 00:22:51.945 00:22:51.945 Run status group 0 (all jobs): 00:22:51.945 READ: bw=48.9MiB/s (51.3MB/s), 8079KiB/s-18.0MiB/s (8273kB/s-18.8MB/s), io=49.6MiB (52.0MB), run=1002-1014msec 00:22:51.945 WRITE: bw=51.2MiB/s (53.7MB/s), 8911KiB/s-18.0MiB/s (9125kB/s-18.9MB/s), io=51.9MiB (54.4MB), run=1002-1014msec 00:22:51.945 00:22:51.945 Disk stats (read/write): 00:22:51.945 nvme0n1: ios=3948/4096, merge=0/0, ticks=26624/23311, in_queue=49935, util=88.58% 00:22:51.945 nvme0n2: ios=1612/2048, merge=0/0, ticks=23279/27758, in_queue=51037, util=87.06% 00:22:51.945 nvme0n3: ios=1679/2048, merge=0/0, ticks=24893/27717, in_queue=52610, util=89.51% 00:22:51.945 nvme0n4: ios=3409/3584, merge=0/0, ticks=12472/12172, in_queue=24644, util=89.96% 00:22:51.945 09:51:15 -- target/fio.sh@55 -- # sync 00:22:51.945 09:51:15 -- target/fio.sh@59 -- # fio_pid=71808 00:22:51.945 09:51:15 -- target/fio.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t read -r 10 00:22:51.945 09:51:15 -- target/fio.sh@61 -- # sleep 3 00:22:51.945 [global] 00:22:51.945 thread=1 00:22:51.945 invalidate=1 00:22:51.945 rw=read 00:22:51.945 time_based=1 00:22:51.945 runtime=10 00:22:51.945 ioengine=libaio 00:22:51.945 direct=1 00:22:51.945 bs=4096 00:22:51.945 iodepth=1 00:22:51.945 norandommap=1 00:22:51.945 numjobs=1 00:22:51.945 00:22:51.945 [job0] 00:22:51.945 filename=/dev/nvme0n1 00:22:51.945 [job1] 00:22:51.945 filename=/dev/nvme0n2 00:22:51.945 [job2] 00:22:51.945 filename=/dev/nvme0n3 00:22:51.945 [job3] 00:22:51.945 filename=/dev/nvme0n4 00:22:51.945 Could not set queue depth (nvme0n1) 00:22:51.945 Could not set queue depth (nvme0n2) 00:22:51.945 Could not set queue depth (nvme0n3) 00:22:51.945 Could not set queue depth (nvme0n4) 00:22:51.945 job0: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:22:51.945 job1: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:22:51.945 job2: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:22:51.945 job3: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:22:51.945 fio-3.35 00:22:51.945 Starting 4 threads 00:22:55.228 09:51:18 -- target/fio.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_delete concat0 00:22:55.228 fio: pid=71851, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:22:55.228 fio: io_u error on file /dev/nvme0n4: Remote I/O error: read offset=50819072, buflen=4096 00:22:55.228 09:51:18 -- target/fio.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_delete raid0 00:22:55.228 fio: pid=71850, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:22:55.228 fio: io_u error on file /dev/nvme0n3: Remote I/O error: read offset=34160640, buflen=4096 00:22:55.487 09:51:18 -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:22:55.487 09:51:18 -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc0 00:22:55.487 
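Note on the hotplug phase in progress here: the 10-second fio read job started above (fio_pid=71808, files /dev/nvme0n1 through /dev/nvme0n4) is still running while fio.sh deletes the concat0 and raid0 arrays and then each Malloc bdev (the two plain namespaces plus the raid0/concat0 members), so the "Remote I/O error" (err=121) lines around this point are the outcome the test is looking for rather than a target failure. A minimal sketch of that teardown, assuming a target already configured with the bdevs created earlier in this run and rpc.py invoked from the scripts/ path shown in the trace:

    # drop the RAID bdevs first (the fio.sh@63/@64 calls traced above)
    rpc.py bdev_raid_delete concat0
    rpc.py bdev_raid_delete raid0
    # then remove each backing Malloc bdev, mirroring the fio.sh@65/@66 loop;
    # the names are the ones created earlier in this run
    for malloc_bdev in Malloc0 Malloc1 Malloc2 Malloc3 Malloc4 Malloc5 Malloc6; do
        rpc.py bdev_malloc_delete "$malloc_bdev"
    done

With the namespaces gone, the host-side fio exits with err=121, and fio.sh later records fio_status=4 and prints "nvmf hotplug test: fio failed as expected".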
fio: pid=71848, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:22:55.487 fio: io_u error on file /dev/nvme0n1: Remote I/O error: read offset=62480384, buflen=4096 00:22:55.745 09:51:19 -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:22:55.745 09:51:19 -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc1 00:22:56.004 fio: pid=71849, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:22:56.004 fio: io_u error on file /dev/nvme0n2: Remote I/O error: read offset=44130304, buflen=4096 00:22:56.004 00:22:56.004 job0: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=71848: Thu Apr 18 09:51:19 2024 00:22:56.004 read: IOPS=4416, BW=17.3MiB/s (18.1MB/s)(59.6MiB/3454msec) 00:22:56.004 slat (usec): min=8, max=20700, avg=17.67, stdev=225.25 00:22:56.004 clat (usec): min=174, max=3045, avg=207.45, stdev=38.45 00:22:56.004 lat (usec): min=187, max=20968, avg=225.11, stdev=229.67 00:22:56.004 clat percentiles (usec): 00:22:56.004 | 1.00th=[ 182], 5.00th=[ 184], 10.00th=[ 188], 20.00th=[ 192], 00:22:56.004 | 30.00th=[ 194], 40.00th=[ 198], 50.00th=[ 200], 60.00th=[ 204], 00:22:56.004 | 70.00th=[ 208], 80.00th=[ 217], 90.00th=[ 229], 95.00th=[ 269], 00:22:56.004 | 99.00th=[ 314], 99.50th=[ 334], 99.90th=[ 465], 99.95th=[ 578], 00:22:56.004 | 99.99th=[ 1483] 00:22:56.004 bw ( KiB/s): min=17048, max=18688, per=37.27%, avg=18081.33, stdev=570.61, samples=6 00:22:56.004 iops : min= 4262, max= 4672, avg=4520.33, stdev=142.65, samples=6 00:22:56.004 lat (usec) : 250=93.88%, 500=6.05%, 750=0.03%, 1000=0.03% 00:22:56.004 lat (msec) : 2=0.01%, 4=0.01% 00:22:56.004 cpu : usr=1.30%, sys=5.50%, ctx=15264, majf=0, minf=1 00:22:56.004 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:22:56.004 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:56.004 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:56.004 issued rwts: total=15255,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:56.004 latency : target=0, window=0, percentile=100.00%, depth=1 00:22:56.004 job1: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=71849: Thu Apr 18 09:51:19 2024 00:22:56.004 read: IOPS=2793, BW=10.9MiB/s (11.4MB/s)(42.1MiB/3857msec) 00:22:56.004 slat (usec): min=11, max=13909, avg=27.28, stdev=226.55 00:22:56.004 clat (usec): min=172, max=4425, avg=328.32, stdev=101.74 00:22:56.004 lat (usec): min=187, max=14154, avg=355.60, stdev=248.64 00:22:56.004 clat percentiles (usec): 00:22:56.004 | 1.00th=[ 180], 5.00th=[ 188], 10.00th=[ 198], 20.00th=[ 273], 00:22:56.004 | 30.00th=[ 330], 40.00th=[ 338], 50.00th=[ 347], 60.00th=[ 355], 00:22:56.004 | 70.00th=[ 359], 80.00th=[ 367], 90.00th=[ 379], 95.00th=[ 396], 00:22:56.004 | 99.00th=[ 545], 99.50th=[ 635], 99.90th=[ 1254], 99.95th=[ 1467], 00:22:56.004 | 99.99th=[ 3884] 00:22:56.004 bw ( KiB/s): min= 9856, max=13159, per=22.05%, avg=10695.86, stdev=1103.80, samples=7 00:22:56.004 iops : min= 2464, max= 3289, avg=2673.86, stdev=275.67, samples=7 00:22:56.004 lat (usec) : 250=18.74%, 500=79.67%, 750=1.26%, 1000=0.11% 00:22:56.004 lat (msec) : 2=0.18%, 4=0.03%, 10=0.01% 00:22:56.004 cpu : usr=0.93%, sys=5.55%, ctx=10787, majf=0, minf=1 00:22:56.004 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:22:56.004 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, 
>=64=0.0% 00:22:56.004 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:56.004 issued rwts: total=10775,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:56.004 latency : target=0, window=0, percentile=100.00%, depth=1 00:22:56.004 job2: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=71850: Thu Apr 18 09:51:19 2024 00:22:56.004 read: IOPS=2601, BW=10.2MiB/s (10.7MB/s)(32.6MiB/3206msec) 00:22:56.004 slat (usec): min=12, max=14011, avg=25.40, stdev=178.82 00:22:56.004 clat (usec): min=194, max=2969, avg=356.69, stdev=64.78 00:22:56.004 lat (usec): min=210, max=14370, avg=382.09, stdev=190.22 00:22:56.004 clat percentiles (usec): 00:22:56.004 | 1.00th=[ 231], 5.00th=[ 318], 10.00th=[ 326], 20.00th=[ 334], 00:22:56.004 | 30.00th=[ 343], 40.00th=[ 347], 50.00th=[ 355], 60.00th=[ 359], 00:22:56.004 | 70.00th=[ 367], 80.00th=[ 375], 90.00th=[ 388], 95.00th=[ 400], 00:22:56.004 | 99.00th=[ 453], 99.50th=[ 611], 99.90th=[ 1106], 99.95th=[ 1418], 00:22:56.004 | 99.99th=[ 2966] 00:22:56.004 bw ( KiB/s): min=10304, max=10624, per=21.61%, avg=10485.33, stdev=123.90, samples=6 00:22:56.004 iops : min= 2576, max= 2656, avg=2621.33, stdev=30.98, samples=6 00:22:56.004 lat (usec) : 250=1.77%, 500=97.36%, 750=0.58%, 1000=0.11% 00:22:56.004 lat (msec) : 2=0.13%, 4=0.04% 00:22:56.004 cpu : usr=1.37%, sys=5.15%, ctx=8343, majf=0, minf=1 00:22:56.004 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:22:56.004 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:56.004 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:56.004 issued rwts: total=8341,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:56.004 latency : target=0, window=0, percentile=100.00%, depth=1 00:22:56.004 job3: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=71851: Thu Apr 18 09:51:19 2024 00:22:56.004 read: IOPS=4201, BW=16.4MiB/s (17.2MB/s)(48.5MiB/2953msec) 00:22:56.004 slat (nsec): min=11694, max=91412, avg=15073.14, stdev=4460.87 00:22:56.004 clat (usec): min=192, max=1766, avg=221.47, stdev=27.05 00:22:56.004 lat (usec): min=205, max=1783, avg=236.54, stdev=28.06 00:22:56.004 clat percentiles (usec): 00:22:56.004 | 1.00th=[ 198], 5.00th=[ 202], 10.00th=[ 204], 20.00th=[ 208], 00:22:56.004 | 30.00th=[ 212], 40.00th=[ 217], 50.00th=[ 219], 60.00th=[ 223], 00:22:56.004 | 70.00th=[ 227], 80.00th=[ 233], 90.00th=[ 241], 95.00th=[ 249], 00:22:56.004 | 99.00th=[ 265], 99.50th=[ 273], 99.90th=[ 371], 99.95th=[ 725], 00:22:56.004 | 99.99th=[ 1549] 00:22:56.004 bw ( KiB/s): min=15640, max=17120, per=34.63%, avg=16800.00, stdev=648.79, samples=5 00:22:56.004 iops : min= 3910, max= 4280, avg=4200.00, stdev=162.20, samples=5 00:22:56.004 lat (usec) : 250=95.92%, 500=4.01%, 750=0.02%, 1000=0.02% 00:22:56.004 lat (msec) : 2=0.02% 00:22:56.004 cpu : usr=1.19%, sys=5.69%, ctx=12408, majf=0, minf=1 00:22:56.004 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:22:56.004 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:56.004 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:56.004 issued rwts: total=12408,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:56.004 latency : target=0, window=0, percentile=100.00%, depth=1 00:22:56.004 00:22:56.004 Run status group 0 (all jobs): 00:22:56.004 READ: bw=47.4MiB/s (49.7MB/s), 10.2MiB/s-17.3MiB/s (10.7MB/s-18.1MB/s), io=183MiB (192MB), 
run=2953-3857msec 00:22:56.004 00:22:56.004 Disk stats (read/write): 00:22:56.004 nvme0n1: ios=14977/0, merge=0/0, ticks=3117/0, in_queue=3117, util=94.96% 00:22:56.004 nvme0n2: ios=9733/0, merge=0/0, ticks=3381/0, in_queue=3381, util=95.42% 00:22:56.004 nvme0n3: ios=8104/0, merge=0/0, ticks=2910/0, in_queue=2910, util=96.15% 00:22:56.004 nvme0n4: ios=12049/0, merge=0/0, ticks=2726/0, in_queue=2726, util=96.70% 00:22:56.262 09:51:19 -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:22:56.262 09:51:19 -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc2 00:22:56.519 09:51:19 -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:22:56.519 09:51:19 -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc3 00:22:57.085 09:51:20 -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:22:57.085 09:51:20 -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc4 00:22:57.344 09:51:20 -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:22:57.344 09:51:20 -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc5 00:22:57.910 09:51:21 -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:22:57.910 09:51:21 -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc6 00:22:58.477 09:51:21 -- target/fio.sh@69 -- # fio_status=0 00:22:58.477 09:51:21 -- target/fio.sh@70 -- # wait 71808 00:22:58.477 09:51:21 -- target/fio.sh@70 -- # fio_status=4 00:22:58.477 09:51:21 -- target/fio.sh@72 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:22:58.477 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:22:58.477 09:51:21 -- target/fio.sh@73 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:22:58.477 09:51:21 -- common/autotest_common.sh@1205 -- # local i=0 00:22:58.477 09:51:21 -- common/autotest_common.sh@1206 -- # lsblk -o NAME,SERIAL 00:22:58.477 09:51:21 -- common/autotest_common.sh@1206 -- # grep -q -w SPDKISFASTANDAWESOME 00:22:58.477 09:51:21 -- common/autotest_common.sh@1213 -- # lsblk -l -o NAME,SERIAL 00:22:58.477 09:51:21 -- common/autotest_common.sh@1213 -- # grep -q -w SPDKISFASTANDAWESOME 00:22:58.477 nvmf hotplug test: fio failed as expected 00:22:58.478 09:51:21 -- common/autotest_common.sh@1217 -- # return 0 00:22:58.478 09:51:21 -- target/fio.sh@75 -- # '[' 4 -eq 0 ']' 00:22:58.478 09:51:21 -- target/fio.sh@80 -- # echo 'nvmf hotplug test: fio failed as expected' 00:22:58.478 09:51:21 -- target/fio.sh@83 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:22:58.736 09:51:21 -- target/fio.sh@85 -- # rm -f ./local-job0-0-verify.state 00:22:58.736 09:51:21 -- target/fio.sh@86 -- # rm -f ./local-job1-1-verify.state 00:22:58.736 09:51:21 -- target/fio.sh@87 -- # rm -f ./local-job2-2-verify.state 00:22:58.736 09:51:21 -- target/fio.sh@89 -- # trap - SIGINT SIGTERM EXIT 00:22:58.736 09:51:21 -- target/fio.sh@91 -- # nvmftestfini 00:22:58.736 09:51:21 -- nvmf/common.sh@477 -- # nvmfcleanup 00:22:58.736 09:51:21 -- nvmf/common.sh@117 -- # sync 00:22:58.736 09:51:21 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:22:58.736 09:51:21 -- nvmf/common.sh@120 -- # set +e 00:22:58.736 09:51:21 -- nvmf/common.sh@121 -- # 
for i in {1..20} 00:22:58.736 09:51:21 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:22:58.736 rmmod nvme_tcp 00:22:58.736 rmmod nvme_fabrics 00:22:58.736 rmmod nvme_keyring 00:22:58.736 09:51:21 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:22:58.736 09:51:21 -- nvmf/common.sh@124 -- # set -e 00:22:58.736 09:51:21 -- nvmf/common.sh@125 -- # return 0 00:22:58.736 09:51:21 -- nvmf/common.sh@478 -- # '[' -n 71421 ']' 00:22:58.736 09:51:21 -- nvmf/common.sh@479 -- # killprocess 71421 00:22:58.736 09:51:21 -- common/autotest_common.sh@936 -- # '[' -z 71421 ']' 00:22:58.736 09:51:21 -- common/autotest_common.sh@940 -- # kill -0 71421 00:22:58.736 09:51:21 -- common/autotest_common.sh@941 -- # uname 00:22:58.736 09:51:21 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:22:58.736 09:51:21 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 71421 00:22:58.736 killing process with pid 71421 00:22:58.736 09:51:22 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:22:58.736 09:51:22 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:22:58.736 09:51:22 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 71421' 00:22:58.736 09:51:22 -- common/autotest_common.sh@955 -- # kill 71421 00:22:58.736 09:51:22 -- common/autotest_common.sh@960 -- # wait 71421 00:23:00.112 09:51:23 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:23:00.112 09:51:23 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:23:00.112 09:51:23 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:23:00.112 09:51:23 -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:23:00.112 09:51:23 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:23:00.112 09:51:23 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:00.112 09:51:23 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:23:00.112 09:51:23 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:00.112 09:51:23 -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:23:00.112 00:23:00.112 real 0m22.171s 00:23:00.112 user 1m22.163s 00:23:00.112 sys 0m10.260s 00:23:00.112 09:51:23 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:23:00.112 09:51:23 -- common/autotest_common.sh@10 -- # set +x 00:23:00.112 ************************************ 00:23:00.112 END TEST nvmf_fio_target 00:23:00.112 ************************************ 00:23:00.112 09:51:23 -- nvmf/nvmf.sh@56 -- # run_test nvmf_bdevio /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:23:00.112 09:51:23 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:23:00.112 09:51:23 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:23:00.112 09:51:23 -- common/autotest_common.sh@10 -- # set +x 00:23:00.112 ************************************ 00:23:00.112 START TEST nvmf_bdevio 00:23:00.112 ************************************ 00:23:00.112 09:51:23 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:23:00.112 * Looking for test storage... 
00:23:00.112 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:23:00.112 09:51:23 -- target/bdevio.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:23:00.112 09:51:23 -- nvmf/common.sh@7 -- # uname -s 00:23:00.112 09:51:23 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:00.112 09:51:23 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:00.112 09:51:23 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:00.112 09:51:23 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:00.112 09:51:23 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:00.112 09:51:23 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:00.112 09:51:23 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:00.112 09:51:23 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:00.112 09:51:23 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:00.112 09:51:23 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:00.112 09:51:23 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:e0c1fd61-2902-4c83-8893-eeb512aeb67d 00:23:00.112 09:51:23 -- nvmf/common.sh@18 -- # NVME_HOSTID=e0c1fd61-2902-4c83-8893-eeb512aeb67d 00:23:00.112 09:51:23 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:00.112 09:51:23 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:00.112 09:51:23 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:23:00.112 09:51:23 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:00.112 09:51:23 -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:23:00.112 09:51:23 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:00.112 09:51:23 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:00.112 09:51:23 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:00.112 09:51:23 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:00.112 09:51:23 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:00.112 09:51:23 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:00.112 09:51:23 -- paths/export.sh@5 -- # export PATH 00:23:00.112 09:51:23 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:00.112 09:51:23 -- nvmf/common.sh@47 -- # : 0 00:23:00.112 09:51:23 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:23:00.112 09:51:23 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:23:00.112 09:51:23 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:00.112 09:51:23 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:00.112 09:51:23 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:00.112 09:51:23 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:23:00.112 09:51:23 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:23:00.112 09:51:23 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:23:00.112 09:51:23 -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:23:00.112 09:51:23 -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:23:00.112 09:51:23 -- target/bdevio.sh@14 -- # nvmftestinit 00:23:00.112 09:51:23 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:23:00.112 09:51:23 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:00.112 09:51:23 -- nvmf/common.sh@437 -- # prepare_net_devs 00:23:00.112 09:51:23 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:23:00.112 09:51:23 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:23:00.112 09:51:23 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:00.113 09:51:23 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:23:00.113 09:51:23 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:00.113 09:51:23 -- nvmf/common.sh@403 -- # [[ virt != virt ]] 00:23:00.113 09:51:23 -- nvmf/common.sh@405 -- # [[ no == yes ]] 00:23:00.113 09:51:23 -- nvmf/common.sh@412 -- # [[ virt == phy ]] 00:23:00.113 09:51:23 -- nvmf/common.sh@415 -- # [[ virt == phy-fallback ]] 00:23:00.113 09:51:23 -- nvmf/common.sh@420 -- # [[ tcp == tcp ]] 00:23:00.113 09:51:23 -- nvmf/common.sh@421 -- # nvmf_veth_init 00:23:00.113 09:51:23 -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:00.113 09:51:23 -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:00.113 09:51:23 -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:23:00.113 09:51:23 -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:23:00.113 09:51:23 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:23:00.113 09:51:23 -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:23:00.113 09:51:23 -- nvmf/common.sh@147 -- # 
NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:23:00.113 09:51:23 -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:00.113 09:51:23 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:23:00.113 09:51:23 -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:23:00.113 09:51:23 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:23:00.113 09:51:23 -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:23:00.113 09:51:23 -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:23:00.113 09:51:23 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:23:00.371 Cannot find device "nvmf_tgt_br" 00:23:00.371 09:51:23 -- nvmf/common.sh@155 -- # true 00:23:00.371 09:51:23 -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:23:00.371 Cannot find device "nvmf_tgt_br2" 00:23:00.371 09:51:23 -- nvmf/common.sh@156 -- # true 00:23:00.371 09:51:23 -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:23:00.371 09:51:23 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:23:00.371 Cannot find device "nvmf_tgt_br" 00:23:00.371 09:51:23 -- nvmf/common.sh@158 -- # true 00:23:00.371 09:51:23 -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:23:00.371 Cannot find device "nvmf_tgt_br2" 00:23:00.371 09:51:23 -- nvmf/common.sh@159 -- # true 00:23:00.371 09:51:23 -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:23:00.371 09:51:23 -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:23:00.371 09:51:23 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:23:00.371 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:23:00.371 09:51:23 -- nvmf/common.sh@162 -- # true 00:23:00.371 09:51:23 -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:23:00.371 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:23:00.371 09:51:23 -- nvmf/common.sh@163 -- # true 00:23:00.371 09:51:23 -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:23:00.371 09:51:23 -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:23:00.371 09:51:23 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:23:00.371 09:51:23 -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:23:00.371 09:51:23 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:23:00.371 09:51:23 -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:23:00.371 09:51:23 -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:23:00.371 09:51:23 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:23:00.371 09:51:23 -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:23:00.371 09:51:23 -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:23:00.371 09:51:23 -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:23:00.371 09:51:23 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:23:00.371 09:51:23 -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:23:00.371 09:51:23 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:23:00.371 09:51:23 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:23:00.371 09:51:23 -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip 
link set lo up 00:23:00.371 09:51:23 -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:23:00.371 09:51:23 -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:23:00.371 09:51:23 -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:23:00.371 09:51:23 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:23:00.629 09:51:23 -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:23:00.629 09:51:23 -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:23:00.629 09:51:23 -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:23:00.629 09:51:23 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:23:00.629 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:23:00.629 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.077 ms 00:23:00.629 00:23:00.629 --- 10.0.0.2 ping statistics --- 00:23:00.629 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:00.629 rtt min/avg/max/mdev = 0.077/0.077/0.077/0.000 ms 00:23:00.629 09:51:23 -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:23:00.629 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:23:00.629 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.039 ms 00:23:00.629 00:23:00.629 --- 10.0.0.3 ping statistics --- 00:23:00.629 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:00.629 rtt min/avg/max/mdev = 0.039/0.039/0.039/0.000 ms 00:23:00.629 09:51:23 -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:23:00.629 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:23:00.629 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.028 ms 00:23:00.629 00:23:00.629 --- 10.0.0.1 ping statistics --- 00:23:00.629 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:00.629 rtt min/avg/max/mdev = 0.028/0.028/0.028/0.000 ms 00:23:00.629 09:51:23 -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:00.629 09:51:23 -- nvmf/common.sh@422 -- # return 0 00:23:00.629 09:51:23 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:23:00.629 09:51:23 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:00.629 09:51:23 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:23:00.629 09:51:23 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:23:00.630 09:51:23 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:00.630 09:51:23 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:23:00.630 09:51:23 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:23:00.630 09:51:23 -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:23:00.630 09:51:23 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:23:00.630 09:51:23 -- common/autotest_common.sh@710 -- # xtrace_disable 00:23:00.630 09:51:23 -- common/autotest_common.sh@10 -- # set +x 00:23:00.630 09:51:23 -- nvmf/common.sh@470 -- # nvmfpid=72141 00:23:00.630 09:51:23 -- nvmf/common.sh@471 -- # waitforlisten 72141 00:23:00.630 09:51:23 -- nvmf/common.sh@469 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x78 00:23:00.630 09:51:23 -- common/autotest_common.sh@817 -- # '[' -z 72141 ']' 00:23:00.630 09:51:23 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:00.630 09:51:23 -- common/autotest_common.sh@822 -- # local max_retries=100 00:23:00.630 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:23:00.630 09:51:23 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:00.630 09:51:23 -- common/autotest_common.sh@826 -- # xtrace_disable 00:23:00.630 09:51:23 -- common/autotest_common.sh@10 -- # set +x 00:23:00.630 [2024-04-18 09:51:23.913885] Starting SPDK v24.05-pre git sha1 65b4e17c6 / DPDK 23.11.0 initialization... 00:23:00.630 [2024-04-18 09:51:23.914029] [ DPDK EAL parameters: nvmf -c 0x78 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:00.888 [2024-04-18 09:51:24.083728] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:23:01.146 [2024-04-18 09:51:24.375403] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:01.146 [2024-04-18 09:51:24.375466] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:01.146 [2024-04-18 09:51:24.375488] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:01.146 [2024-04-18 09:51:24.375502] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:01.146 [2024-04-18 09:51:24.375516] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:23:01.146 [2024-04-18 09:51:24.375991] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:23:01.146 [2024-04-18 09:51:24.376398] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 5 00:23:01.146 [2024-04-18 09:51:24.376508] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:23:01.146 [2024-04-18 09:51:24.376526] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 6 00:23:01.404 09:51:24 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:23:01.404 09:51:24 -- common/autotest_common.sh@850 -- # return 0 00:23:01.404 09:51:24 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:23:01.404 09:51:24 -- common/autotest_common.sh@716 -- # xtrace_disable 00:23:01.404 09:51:24 -- common/autotest_common.sh@10 -- # set +x 00:23:01.663 09:51:24 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:01.663 09:51:24 -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:23:01.663 09:51:24 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:01.663 09:51:24 -- common/autotest_common.sh@10 -- # set +x 00:23:01.663 [2024-04-18 09:51:24.815900] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:01.663 09:51:24 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:01.663 09:51:24 -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:23:01.663 09:51:24 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:01.663 09:51:24 -- common/autotest_common.sh@10 -- # set +x 00:23:01.663 Malloc0 00:23:01.663 09:51:24 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:01.663 09:51:24 -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:23:01.663 09:51:24 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:01.663 09:51:24 -- common/autotest_common.sh@10 -- # set +x 00:23:01.663 09:51:24 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:01.663 09:51:24 -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns 
nqn.2016-06.io.spdk:cnode1 Malloc0 00:23:01.663 09:51:24 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:01.663 09:51:24 -- common/autotest_common.sh@10 -- # set +x 00:23:01.663 09:51:24 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:01.663 09:51:24 -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:23:01.663 09:51:24 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:01.663 09:51:24 -- common/autotest_common.sh@10 -- # set +x 00:23:01.663 [2024-04-18 09:51:24.933172] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:01.663 09:51:24 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:01.663 09:51:24 -- target/bdevio.sh@24 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 00:23:01.663 09:51:24 -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:23:01.663 09:51:24 -- nvmf/common.sh@521 -- # config=() 00:23:01.663 09:51:24 -- nvmf/common.sh@521 -- # local subsystem config 00:23:01.663 09:51:24 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:23:01.663 09:51:24 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:23:01.663 { 00:23:01.663 "params": { 00:23:01.663 "name": "Nvme$subsystem", 00:23:01.663 "trtype": "$TEST_TRANSPORT", 00:23:01.663 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:01.663 "adrfam": "ipv4", 00:23:01.663 "trsvcid": "$NVMF_PORT", 00:23:01.663 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:01.663 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:01.663 "hdgst": ${hdgst:-false}, 00:23:01.663 "ddgst": ${ddgst:-false} 00:23:01.663 }, 00:23:01.663 "method": "bdev_nvme_attach_controller" 00:23:01.663 } 00:23:01.663 EOF 00:23:01.663 )") 00:23:01.663 09:51:24 -- nvmf/common.sh@543 -- # cat 00:23:01.663 09:51:24 -- nvmf/common.sh@545 -- # jq . 00:23:01.663 09:51:24 -- nvmf/common.sh@546 -- # IFS=, 00:23:01.663 09:51:24 -- nvmf/common.sh@547 -- # printf '%s\n' '{ 00:23:01.663 "params": { 00:23:01.663 "name": "Nvme1", 00:23:01.663 "trtype": "tcp", 00:23:01.663 "traddr": "10.0.0.2", 00:23:01.663 "adrfam": "ipv4", 00:23:01.663 "trsvcid": "4420", 00:23:01.663 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:23:01.663 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:23:01.663 "hdgst": false, 00:23:01.663 "ddgst": false 00:23:01.663 }, 00:23:01.663 "method": "bdev_nvme_attach_controller" 00:23:01.663 }' 00:23:01.663 [2024-04-18 09:51:25.037518] Starting SPDK v24.05-pre git sha1 65b4e17c6 / DPDK 23.11.0 initialization... 
00:23:01.663 [2024-04-18 09:51:25.037673] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72179 ] 00:23:01.923 [2024-04-18 09:51:25.225448] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 3 00:23:02.182 [2024-04-18 09:51:25.510164] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:23:02.182 [2024-04-18 09:51:25.510252] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:23:02.182 [2024-04-18 09:51:25.510983] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:23:02.750 I/O targets: 00:23:02.750 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:23:02.750 00:23:02.750 00:23:02.750 CUnit - A unit testing framework for C - Version 2.1-3 00:23:02.750 http://cunit.sourceforge.net/ 00:23:02.750 00:23:02.750 00:23:02.750 Suite: bdevio tests on: Nvme1n1 00:23:02.750 Test: blockdev write read block ...passed 00:23:02.750 Test: blockdev write zeroes read block ...passed 00:23:02.750 Test: blockdev write zeroes read no split ...passed 00:23:02.750 Test: blockdev write zeroes read split ...passed 00:23:02.750 Test: blockdev write zeroes read split partial ...passed 00:23:02.750 Test: blockdev reset ...[2024-04-18 09:51:25.991390] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:23:02.750 [2024-04-18 09:51:25.991566] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x614000005c40 (9): Bad file descriptor 00:23:02.750 [2024-04-18 09:51:26.005012] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:23:02.750 passed 00:23:02.750 Test: blockdev write read 8 blocks ...passed 00:23:02.750 Test: blockdev write read size > 128k ...passed 00:23:02.750 Test: blockdev write read invalid size ...passed 00:23:02.750 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:23:02.750 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:23:02.750 Test: blockdev write read max offset ...passed 00:23:02.750 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:23:02.750 Test: blockdev writev readv 8 blocks ...passed 00:23:02.750 Test: blockdev writev readv 30 x 1block ...passed 00:23:02.750 Test: blockdev writev readv block ...passed 00:23:02.750 Test: blockdev writev readv size > 128k ...passed 00:23:02.750 Test: blockdev writev readv size > 128k in two iovs ...passed 00:23:02.750 Test: blockdev comparev and writev ...[2024-04-18 09:51:26.019993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:23:02.750 [2024-04-18 09:51:26.020453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:02.750 [2024-04-18 09:51:26.020703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:23:02.750 [2024-04-18 09:51:26.020903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:23:02.750 [2024-04-18 09:51:26.021610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:23:02.750 [2024-04-18 09:51:26.021874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:23:02.750 [2024-04-18 09:51:26.022168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:23:02.750 [2024-04-18 09:51:26.022409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:23:02.750 [2024-04-18 09:51:26.023109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:23:02.750 [2024-04-18 09:51:26.023447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:23:02.750 [2024-04-18 09:51:26.023586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:23:02.750 [2024-04-18 09:51:26.023671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:23:02.750 [2024-04-18 09:51:26.024342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:23:02.750 [2024-04-18 09:51:26.024465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:23:02.750 [2024-04-18 09:51:26.024568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:23:02.750 passed 00:23:02.750 Test: blockdev nvme passthru rw ...[2024-04-18 09:51:26.024760] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:23:02.750 passed 00:23:02.750 Test: blockdev nvme passthru vendor specific ...[2024-04-18 09:51:26.026210] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:23:02.750 [2024-04-18 09:51:26.026350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:23:02.750 [2024-04-18 09:51:26.026612] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:23:02.750 [2024-04-18 09:51:26.026731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:23:02.750 [2024-04-18 09:51:26.027002] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:23:02.750 [2024-04-18 09:51:26.027120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:23:02.750 [2024-04-18 09:51:26.027401] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:23:02.750 [2024-04-18 09:51:26.027506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:23:02.750 passed 00:23:02.750 Test: blockdev nvme admin passthru ...passed 00:23:02.750 Test: blockdev copy ...passed 00:23:02.750 00:23:02.750 Run Summary: Type Total Ran Passed Failed Inactive 00:23:02.750 suites 1 1 n/a 0 0 00:23:02.750 tests 23 23 23 0 0 00:23:02.750 asserts 152 152 152 0 n/a 00:23:02.750 00:23:02.750 Elapsed time = 0.310 seconds 00:23:04.142 09:51:27 -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:23:04.142 09:51:27 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:04.142 09:51:27 -- common/autotest_common.sh@10 -- # set +x 00:23:04.142 09:51:27 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:04.142 09:51:27 -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:23:04.142 09:51:27 -- target/bdevio.sh@30 -- # nvmftestfini 00:23:04.142 09:51:27 -- nvmf/common.sh@477 -- # nvmfcleanup 00:23:04.142 09:51:27 -- nvmf/common.sh@117 -- # sync 00:23:04.142 09:51:27 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:23:04.142 09:51:27 -- nvmf/common.sh@120 -- # set +e 00:23:04.142 09:51:27 -- nvmf/common.sh@121 -- # for i in {1..20} 00:23:04.142 09:51:27 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:23:04.142 rmmod nvme_tcp 00:23:04.142 rmmod nvme_fabrics 00:23:04.142 rmmod nvme_keyring 00:23:04.142 09:51:27 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:23:04.142 09:51:27 -- nvmf/common.sh@124 -- # set -e 00:23:04.142 09:51:27 -- nvmf/common.sh@125 -- # return 0 00:23:04.142 09:51:27 -- nvmf/common.sh@478 -- # '[' -n 72141 ']' 00:23:04.142 09:51:27 -- nvmf/common.sh@479 -- # killprocess 72141 00:23:04.142 09:51:27 -- common/autotest_common.sh@936 -- # '[' -z 72141 ']' 00:23:04.142 09:51:27 -- common/autotest_common.sh@940 -- # kill -0 72141 00:23:04.142 09:51:27 -- common/autotest_common.sh@941 -- # uname 00:23:04.142 09:51:27 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:23:04.142 09:51:27 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 72141 00:23:04.142 09:51:27 -- common/autotest_common.sh@942 -- # 
process_name=reactor_3 00:23:04.142 09:51:27 -- common/autotest_common.sh@946 -- # '[' reactor_3 = sudo ']' 00:23:04.142 killing process with pid 72141 00:23:04.142 09:51:27 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 72141' 00:23:04.142 09:51:27 -- common/autotest_common.sh@955 -- # kill 72141 00:23:04.142 09:51:27 -- common/autotest_common.sh@960 -- # wait 72141 00:23:05.536 09:51:28 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:23:05.536 09:51:28 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:23:05.536 09:51:28 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:23:05.536 09:51:28 -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:23:05.536 09:51:28 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:23:05.536 09:51:28 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:05.536 09:51:28 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:23:05.536 09:51:28 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:05.536 09:51:28 -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:23:05.536 ************************************ 00:23:05.536 END TEST nvmf_bdevio 00:23:05.536 ************************************ 00:23:05.536 00:23:05.536 real 0m5.325s 00:23:05.536 user 0m20.269s 00:23:05.536 sys 0m1.029s 00:23:05.536 09:51:28 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:23:05.536 09:51:28 -- common/autotest_common.sh@10 -- # set +x 00:23:05.536 09:51:28 -- nvmf/nvmf.sh@58 -- # '[' tcp = tcp ']' 00:23:05.536 09:51:28 -- nvmf/nvmf.sh@59 -- # run_test nvmf_bdevio_no_huge /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:23:05.536 09:51:28 -- common/autotest_common.sh@1087 -- # '[' 4 -le 1 ']' 00:23:05.536 09:51:28 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:23:05.536 09:51:28 -- common/autotest_common.sh@10 -- # set +x 00:23:05.536 ************************************ 00:23:05.536 START TEST nvmf_bdevio_no_huge 00:23:05.536 ************************************ 00:23:05.536 09:51:28 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:23:05.536 * Looking for test storage... 
00:23:05.536 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:23:05.536 09:51:28 -- target/bdevio.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:23:05.536 09:51:28 -- nvmf/common.sh@7 -- # uname -s 00:23:05.536 09:51:28 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:05.536 09:51:28 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:05.536 09:51:28 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:05.536 09:51:28 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:05.536 09:51:28 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:05.536 09:51:28 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:05.536 09:51:28 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:05.536 09:51:28 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:05.536 09:51:28 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:05.536 09:51:28 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:05.536 09:51:28 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:e0c1fd61-2902-4c83-8893-eeb512aeb67d 00:23:05.536 09:51:28 -- nvmf/common.sh@18 -- # NVME_HOSTID=e0c1fd61-2902-4c83-8893-eeb512aeb67d 00:23:05.537 09:51:28 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:05.537 09:51:28 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:05.537 09:51:28 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:23:05.537 09:51:28 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:05.537 09:51:28 -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:23:05.537 09:51:28 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:05.537 09:51:28 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:05.537 09:51:28 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:05.537 09:51:28 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:05.537 09:51:28 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:05.537 09:51:28 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:05.537 09:51:28 -- paths/export.sh@5 -- # export PATH 00:23:05.537 09:51:28 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:05.537 09:51:28 -- nvmf/common.sh@47 -- # : 0 00:23:05.537 09:51:28 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:23:05.537 09:51:28 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:23:05.537 09:51:28 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:05.537 09:51:28 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:05.537 09:51:28 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:05.537 09:51:28 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:23:05.537 09:51:28 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:23:05.537 09:51:28 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:23:05.537 09:51:28 -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:23:05.537 09:51:28 -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:23:05.537 09:51:28 -- target/bdevio.sh@14 -- # nvmftestinit 00:23:05.537 09:51:28 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:23:05.537 09:51:28 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:05.537 09:51:28 -- nvmf/common.sh@437 -- # prepare_net_devs 00:23:05.537 09:51:28 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:23:05.537 09:51:28 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:23:05.537 09:51:28 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:05.537 09:51:28 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:23:05.537 09:51:28 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:05.537 09:51:28 -- nvmf/common.sh@403 -- # [[ virt != virt ]] 00:23:05.537 09:51:28 -- nvmf/common.sh@405 -- # [[ no == yes ]] 00:23:05.537 09:51:28 -- nvmf/common.sh@412 -- # [[ virt == phy ]] 00:23:05.537 09:51:28 -- nvmf/common.sh@415 -- # [[ virt == phy-fallback ]] 00:23:05.537 09:51:28 -- nvmf/common.sh@420 -- # [[ tcp == tcp ]] 00:23:05.537 09:51:28 -- nvmf/common.sh@421 -- # nvmf_veth_init 00:23:05.537 09:51:28 -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:05.537 09:51:28 -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:05.537 09:51:28 -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:23:05.537 09:51:28 -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:23:05.537 09:51:28 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:23:05.537 09:51:28 -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:23:05.537 09:51:28 -- nvmf/common.sh@147 -- # 
NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:23:05.537 09:51:28 -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:05.537 09:51:28 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:23:05.537 09:51:28 -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:23:05.537 09:51:28 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:23:05.537 09:51:28 -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:23:05.537 09:51:28 -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:23:05.537 09:51:28 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:23:05.795 Cannot find device "nvmf_tgt_br" 00:23:05.795 09:51:28 -- nvmf/common.sh@155 -- # true 00:23:05.795 09:51:28 -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:23:05.795 Cannot find device "nvmf_tgt_br2" 00:23:05.795 09:51:28 -- nvmf/common.sh@156 -- # true 00:23:05.795 09:51:28 -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:23:05.795 09:51:28 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:23:05.795 Cannot find device "nvmf_tgt_br" 00:23:05.795 09:51:28 -- nvmf/common.sh@158 -- # true 00:23:05.795 09:51:28 -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:23:05.795 Cannot find device "nvmf_tgt_br2" 00:23:05.795 09:51:28 -- nvmf/common.sh@159 -- # true 00:23:05.795 09:51:28 -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:23:05.795 09:51:29 -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:23:05.795 09:51:29 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:23:05.795 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:23:05.795 09:51:29 -- nvmf/common.sh@162 -- # true 00:23:05.795 09:51:29 -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:23:05.795 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:23:05.795 09:51:29 -- nvmf/common.sh@163 -- # true 00:23:05.795 09:51:29 -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:23:05.795 09:51:29 -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:23:05.795 09:51:29 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:23:05.795 09:51:29 -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:23:05.795 09:51:29 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:23:05.795 09:51:29 -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:23:05.795 09:51:29 -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:23:05.795 09:51:29 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:23:05.795 09:51:29 -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:23:05.795 09:51:29 -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:23:05.795 09:51:29 -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:23:05.795 09:51:29 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:23:05.795 09:51:29 -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:23:05.795 09:51:29 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:23:05.795 09:51:29 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:23:05.795 09:51:29 -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip 
link set lo up 00:23:05.795 09:51:29 -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:23:05.795 09:51:29 -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:23:05.795 09:51:29 -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:23:06.053 09:51:29 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:23:06.053 09:51:29 -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:23:06.053 09:51:29 -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:23:06.053 09:51:29 -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:23:06.053 09:51:29 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:23:06.053 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:23:06.053 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.077 ms 00:23:06.053 00:23:06.053 --- 10.0.0.2 ping statistics --- 00:23:06.053 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:06.053 rtt min/avg/max/mdev = 0.077/0.077/0.077/0.000 ms 00:23:06.053 09:51:29 -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:23:06.053 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:23:06.053 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.038 ms 00:23:06.053 00:23:06.053 --- 10.0.0.3 ping statistics --- 00:23:06.053 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:06.053 rtt min/avg/max/mdev = 0.038/0.038/0.038/0.000 ms 00:23:06.053 09:51:29 -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:23:06.053 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:23:06.053 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.019 ms 00:23:06.053 00:23:06.053 --- 10.0.0.1 ping statistics --- 00:23:06.053 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:06.053 rtt min/avg/max/mdev = 0.019/0.019/0.019/0.000 ms 00:23:06.053 09:51:29 -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:06.053 09:51:29 -- nvmf/common.sh@422 -- # return 0 00:23:06.053 09:51:29 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:23:06.053 09:51:29 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:06.053 09:51:29 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:23:06.053 09:51:29 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:23:06.053 09:51:29 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:06.053 09:51:29 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:23:06.053 09:51:29 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:23:06.053 09:51:29 -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:23:06.053 09:51:29 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:23:06.053 09:51:29 -- common/autotest_common.sh@710 -- # xtrace_disable 00:23:06.053 09:51:29 -- common/autotest_common.sh@10 -- # set +x 00:23:06.053 09:51:29 -- nvmf/common.sh@470 -- # nvmfpid=72410 00:23:06.053 09:51:29 -- nvmf/common.sh@471 -- # waitforlisten 72410 00:23:06.053 09:51:29 -- common/autotest_common.sh@817 -- # '[' -z 72410 ']' 00:23:06.053 09:51:29 -- nvmf/common.sh@469 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --no-huge -s 1024 -m 0x78 00:23:06.053 09:51:29 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:06.053 09:51:29 -- common/autotest_common.sh@822 -- # local max_retries=100 00:23:06.053 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:23:06.053 09:51:29 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:06.053 09:51:29 -- common/autotest_common.sh@826 -- # xtrace_disable 00:23:06.053 09:51:29 -- common/autotest_common.sh@10 -- # set +x 00:23:06.053 [2024-04-18 09:51:29.390290] Starting SPDK v24.05-pre git sha1 65b4e17c6 / DPDK 23.11.0 initialization... 00:23:06.053 [2024-04-18 09:51:29.390488] [ DPDK EAL parameters: nvmf -c 0x78 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk0 --proc-type=auto ] 00:23:06.312 [2024-04-18 09:51:29.606054] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:23:06.570 [2024-04-18 09:51:29.924898] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:06.570 [2024-04-18 09:51:29.924970] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:06.570 [2024-04-18 09:51:29.924993] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:06.570 [2024-04-18 09:51:29.925008] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:06.570 [2024-04-18 09:51:29.925021] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:23:06.570 [2024-04-18 09:51:29.925236] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:23:06.570 [2024-04-18 09:51:29.926157] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 5 00:23:06.570 [2024-04-18 09:51:29.926266] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:23:06.570 [2024-04-18 09:51:29.926270] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 6 00:23:07.135 09:51:30 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:23:07.135 09:51:30 -- common/autotest_common.sh@850 -- # return 0 00:23:07.135 09:51:30 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:23:07.135 09:51:30 -- common/autotest_common.sh@716 -- # xtrace_disable 00:23:07.135 09:51:30 -- common/autotest_common.sh@10 -- # set +x 00:23:07.135 09:51:30 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:07.135 09:51:30 -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:23:07.135 09:51:30 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:07.135 09:51:30 -- common/autotest_common.sh@10 -- # set +x 00:23:07.135 [2024-04-18 09:51:30.316594] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:07.135 09:51:30 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:07.135 09:51:30 -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:23:07.135 09:51:30 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:07.135 09:51:30 -- common/autotest_common.sh@10 -- # set +x 00:23:07.135 Malloc0 00:23:07.135 09:51:30 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:07.135 09:51:30 -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:23:07.135 09:51:30 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:07.135 09:51:30 -- common/autotest_common.sh@10 -- # set +x 00:23:07.135 09:51:30 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:07.135 09:51:30 -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns 
nqn.2016-06.io.spdk:cnode1 Malloc0 00:23:07.135 09:51:30 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:07.135 09:51:30 -- common/autotest_common.sh@10 -- # set +x 00:23:07.135 09:51:30 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:07.135 09:51:30 -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:23:07.135 09:51:30 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:07.135 09:51:30 -- common/autotest_common.sh@10 -- # set +x 00:23:07.135 [2024-04-18 09:51:30.411516] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:07.135 09:51:30 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:07.135 09:51:30 -- target/bdevio.sh@24 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 --no-huge -s 1024 00:23:07.135 09:51:30 -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:23:07.135 09:51:30 -- nvmf/common.sh@521 -- # config=() 00:23:07.135 09:51:30 -- nvmf/common.sh@521 -- # local subsystem config 00:23:07.135 09:51:30 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:23:07.135 09:51:30 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:23:07.135 { 00:23:07.135 "params": { 00:23:07.135 "name": "Nvme$subsystem", 00:23:07.135 "trtype": "$TEST_TRANSPORT", 00:23:07.135 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:07.135 "adrfam": "ipv4", 00:23:07.135 "trsvcid": "$NVMF_PORT", 00:23:07.135 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:07.135 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:07.135 "hdgst": ${hdgst:-false}, 00:23:07.135 "ddgst": ${ddgst:-false} 00:23:07.135 }, 00:23:07.135 "method": "bdev_nvme_attach_controller" 00:23:07.135 } 00:23:07.135 EOF 00:23:07.135 )") 00:23:07.135 09:51:30 -- nvmf/common.sh@543 -- # cat 00:23:07.135 09:51:30 -- nvmf/common.sh@545 -- # jq . 00:23:07.135 09:51:30 -- nvmf/common.sh@546 -- # IFS=, 00:23:07.135 09:51:30 -- nvmf/common.sh@547 -- # printf '%s\n' '{ 00:23:07.135 "params": { 00:23:07.135 "name": "Nvme1", 00:23:07.135 "trtype": "tcp", 00:23:07.135 "traddr": "10.0.0.2", 00:23:07.135 "adrfam": "ipv4", 00:23:07.135 "trsvcid": "4420", 00:23:07.135 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:23:07.135 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:23:07.135 "hdgst": false, 00:23:07.135 "ddgst": false 00:23:07.135 }, 00:23:07.135 "method": "bdev_nvme_attach_controller" 00:23:07.135 }' 00:23:07.135 [2024-04-18 09:51:30.509181] Starting SPDK v24.05-pre git sha1 65b4e17c6 / DPDK 23.11.0 initialization... 
00:23:07.135 [2024-04-18 09:51:30.509317] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk_pid72446 ] 00:23:07.394 [2024-04-18 09:51:30.750738] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 3 00:23:07.961 [2024-04-18 09:51:31.058707] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:23:07.961 [2024-04-18 09:51:31.059034] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:23:07.961 [2024-04-18 09:51:31.059103] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:23:08.219 I/O targets: 00:23:08.219 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:23:08.219 00:23:08.219 00:23:08.219 CUnit - A unit testing framework for C - Version 2.1-3 00:23:08.219 http://cunit.sourceforge.net/ 00:23:08.219 00:23:08.219 00:23:08.219 Suite: bdevio tests on: Nvme1n1 00:23:08.219 Test: blockdev write read block ...passed 00:23:08.219 Test: blockdev write zeroes read block ...passed 00:23:08.219 Test: blockdev write zeroes read no split ...passed 00:23:08.219 Test: blockdev write zeroes read split ...passed 00:23:08.219 Test: blockdev write zeroes read split partial ...passed 00:23:08.219 Test: blockdev reset ...[2024-04-18 09:51:31.538031] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:23:08.219 [2024-04-18 09:51:31.538409] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x614000005c40 (9): Bad file descriptor 00:23:08.219 [2024-04-18 09:51:31.559169] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:23:08.219 passed 00:23:08.219 Test: blockdev write read 8 blocks ...passed 00:23:08.219 Test: blockdev write read size > 128k ...passed 00:23:08.219 Test: blockdev write read invalid size ...passed 00:23:08.219 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:23:08.219 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:23:08.219 Test: blockdev write read max offset ...passed 00:23:08.219 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:23:08.219 Test: blockdev writev readv 8 blocks ...passed 00:23:08.219 Test: blockdev writev readv 30 x 1block ...passed 00:23:08.219 Test: blockdev writev readv block ...passed 00:23:08.219 Test: blockdev writev readv size > 128k ...passed 00:23:08.219 Test: blockdev writev readv size > 128k in two iovs ...passed 00:23:08.219 Test: blockdev comparev and writev ...[2024-04-18 09:51:31.573204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:23:08.219 [2024-04-18 09:51:31.573641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:08.219 [2024-04-18 09:51:31.573932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:23:08.219 [2024-04-18 09:51:31.574249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:23:08.219 [2024-04-18 09:51:31.574957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:23:08.219 [2024-04-18 09:51:31.575104] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:23:08.219 [2024-04-18 09:51:31.575235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:23:08.219 [2024-04-18 09:51:31.575331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:23:08.219 [2024-04-18 09:51:31.575924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:23:08.219 [2024-04-18 09:51:31.576048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:23:08.219 [2024-04-18 09:51:31.576187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:23:08.219 [2024-04-18 09:51:31.576305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:23:08.219 [2024-04-18 09:51:31.576849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:23:08.219 [2024-04-18 09:51:31.576955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:23:08.219 [2024-04-18 09:51:31.577049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:23:08.219 [2024-04-18 09:51:31.577187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:23:08.219 passed 00:23:08.219 Test: blockdev nvme passthru rw ...passed 00:23:08.219 Test: blockdev nvme passthru vendor specific ...[2024-04-18 09:51:31.578450] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:23:08.219 [2024-04-18 09:51:31.578597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:23:08.219 [2024-04-18 09:51:31.578897] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:23:08.219 [2024-04-18 09:51:31.579005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:23:08.219 [2024-04-18 09:51:31.579295] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:23:08.219 [2024-04-18 09:51:31.579422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:23:08.219 [2024-04-18 09:51:31.579690] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:23:08.219 [2024-04-18 09:51:31.579794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:23:08.219 passed 00:23:08.219 Test: blockdev nvme admin passthru ...passed 00:23:08.219 Test: blockdev copy ...passed 00:23:08.219 00:23:08.219 Run Summary: Type Total Ran Passed Failed Inactive 00:23:08.220 suites 1 1 n/a 0 0 00:23:08.220 tests 23 23 23 0 0 00:23:08.220 asserts 152 152 152 0 
n/a 00:23:08.220 00:23:08.220 Elapsed time = 0.302 seconds 00:23:09.171 09:51:32 -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:23:09.171 09:51:32 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:09.171 09:51:32 -- common/autotest_common.sh@10 -- # set +x 00:23:09.171 09:51:32 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:09.171 09:51:32 -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:23:09.171 09:51:32 -- target/bdevio.sh@30 -- # nvmftestfini 00:23:09.171 09:51:32 -- nvmf/common.sh@477 -- # nvmfcleanup 00:23:09.171 09:51:32 -- nvmf/common.sh@117 -- # sync 00:23:09.171 09:51:32 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:23:09.171 09:51:32 -- nvmf/common.sh@120 -- # set +e 00:23:09.171 09:51:32 -- nvmf/common.sh@121 -- # for i in {1..20} 00:23:09.171 09:51:32 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:23:09.171 rmmod nvme_tcp 00:23:09.171 rmmod nvme_fabrics 00:23:09.171 rmmod nvme_keyring 00:23:09.171 09:51:32 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:23:09.171 09:51:32 -- nvmf/common.sh@124 -- # set -e 00:23:09.171 09:51:32 -- nvmf/common.sh@125 -- # return 0 00:23:09.171 09:51:32 -- nvmf/common.sh@478 -- # '[' -n 72410 ']' 00:23:09.171 09:51:32 -- nvmf/common.sh@479 -- # killprocess 72410 00:23:09.171 09:51:32 -- common/autotest_common.sh@936 -- # '[' -z 72410 ']' 00:23:09.171 09:51:32 -- common/autotest_common.sh@940 -- # kill -0 72410 00:23:09.171 09:51:32 -- common/autotest_common.sh@941 -- # uname 00:23:09.171 09:51:32 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:23:09.171 09:51:32 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 72410 00:23:09.171 09:51:32 -- common/autotest_common.sh@942 -- # process_name=reactor_3 00:23:09.171 09:51:32 -- common/autotest_common.sh@946 -- # '[' reactor_3 = sudo ']' 00:23:09.171 killing process with pid 72410 00:23:09.171 09:51:32 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 72410' 00:23:09.171 09:51:32 -- common/autotest_common.sh@955 -- # kill 72410 00:23:09.171 09:51:32 -- common/autotest_common.sh@960 -- # wait 72410 00:23:10.107 09:51:33 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:23:10.107 09:51:33 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:23:10.107 09:51:33 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:23:10.107 09:51:33 -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:23:10.107 09:51:33 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:23:10.107 09:51:33 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:10.107 09:51:33 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:23:10.107 09:51:33 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:10.107 09:51:33 -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:23:10.107 00:23:10.107 real 0m4.576s 00:23:10.107 user 0m16.077s 00:23:10.107 sys 0m1.576s 00:23:10.107 09:51:33 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:23:10.107 09:51:33 -- common/autotest_common.sh@10 -- # set +x 00:23:10.107 ************************************ 00:23:10.107 END TEST nvmf_bdevio_no_huge 00:23:10.107 ************************************ 00:23:10.107 09:51:33 -- nvmf/nvmf.sh@60 -- # run_test nvmf_tls /home/vagrant/spdk_repo/spdk/test/nvmf/target/tls.sh --transport=tcp 00:23:10.107 09:51:33 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:23:10.107 09:51:33 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:23:10.107 09:51:33 -- 
common/autotest_common.sh@10 -- # set +x 00:23:10.107 ************************************ 00:23:10.107 START TEST nvmf_tls 00:23:10.107 ************************************ 00:23:10.107 09:51:33 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/tls.sh --transport=tcp 00:23:10.365 * Looking for test storage... 00:23:10.365 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:23:10.365 09:51:33 -- target/tls.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:23:10.365 09:51:33 -- nvmf/common.sh@7 -- # uname -s 00:23:10.365 09:51:33 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:10.365 09:51:33 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:10.365 09:51:33 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:10.365 09:51:33 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:10.365 09:51:33 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:10.365 09:51:33 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:10.365 09:51:33 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:10.365 09:51:33 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:10.365 09:51:33 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:10.365 09:51:33 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:10.365 09:51:33 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:e0c1fd61-2902-4c83-8893-eeb512aeb67d 00:23:10.365 09:51:33 -- nvmf/common.sh@18 -- # NVME_HOSTID=e0c1fd61-2902-4c83-8893-eeb512aeb67d 00:23:10.365 09:51:33 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:10.365 09:51:33 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:10.365 09:51:33 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:23:10.365 09:51:33 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:10.365 09:51:33 -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:23:10.365 09:51:33 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:10.365 09:51:33 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:10.365 09:51:33 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:10.365 09:51:33 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:10.365 09:51:33 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:10.365 09:51:33 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:10.365 09:51:33 -- paths/export.sh@5 -- # export PATH 00:23:10.365 09:51:33 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:10.365 09:51:33 -- nvmf/common.sh@47 -- # : 0 00:23:10.365 09:51:33 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:23:10.365 09:51:33 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:23:10.365 09:51:33 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:10.365 09:51:33 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:10.365 09:51:33 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:10.365 09:51:33 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:23:10.365 09:51:33 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:23:10.365 09:51:33 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:23:10.365 09:51:33 -- target/tls.sh@12 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:23:10.365 09:51:33 -- target/tls.sh@62 -- # nvmftestinit 00:23:10.365 09:51:33 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:23:10.365 09:51:33 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:10.365 09:51:33 -- nvmf/common.sh@437 -- # prepare_net_devs 00:23:10.365 09:51:33 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:23:10.365 09:51:33 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:23:10.365 09:51:33 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:10.365 09:51:33 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:23:10.366 09:51:33 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:10.366 09:51:33 -- nvmf/common.sh@403 -- # [[ virt != virt ]] 00:23:10.366 09:51:33 -- nvmf/common.sh@405 -- # [[ no == yes ]] 00:23:10.366 09:51:33 -- nvmf/common.sh@412 -- # [[ virt == phy ]] 00:23:10.366 09:51:33 -- nvmf/common.sh@415 -- # [[ virt == phy-fallback ]] 00:23:10.366 09:51:33 -- nvmf/common.sh@420 -- # [[ tcp == tcp ]] 00:23:10.366 09:51:33 -- nvmf/common.sh@421 -- # nvmf_veth_init 00:23:10.366 09:51:33 -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:10.366 09:51:33 -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:10.366 09:51:33 -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:23:10.366 09:51:33 -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:23:10.366 09:51:33 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:23:10.366 09:51:33 -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:23:10.366 09:51:33 -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:23:10.366 
09:51:33 -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:10.366 09:51:33 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:23:10.366 09:51:33 -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:23:10.366 09:51:33 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:23:10.366 09:51:33 -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:23:10.366 09:51:33 -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:23:10.366 09:51:33 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:23:10.366 Cannot find device "nvmf_tgt_br" 00:23:10.366 09:51:33 -- nvmf/common.sh@155 -- # true 00:23:10.366 09:51:33 -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:23:10.366 Cannot find device "nvmf_tgt_br2" 00:23:10.366 09:51:33 -- nvmf/common.sh@156 -- # true 00:23:10.366 09:51:33 -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:23:10.366 09:51:33 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:23:10.366 Cannot find device "nvmf_tgt_br" 00:23:10.366 09:51:33 -- nvmf/common.sh@158 -- # true 00:23:10.366 09:51:33 -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:23:10.366 Cannot find device "nvmf_tgt_br2" 00:23:10.366 09:51:33 -- nvmf/common.sh@159 -- # true 00:23:10.366 09:51:33 -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:23:10.366 09:51:33 -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:23:10.366 09:51:33 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:23:10.366 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:23:10.366 09:51:33 -- nvmf/common.sh@162 -- # true 00:23:10.366 09:51:33 -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:23:10.366 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:23:10.366 09:51:33 -- nvmf/common.sh@163 -- # true 00:23:10.366 09:51:33 -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:23:10.366 09:51:33 -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:23:10.366 09:51:33 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:23:10.366 09:51:33 -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:23:10.624 09:51:33 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:23:10.624 09:51:33 -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:23:10.624 09:51:33 -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:23:10.624 09:51:33 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:23:10.624 09:51:33 -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:23:10.624 09:51:33 -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:23:10.624 09:51:33 -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:23:10.624 09:51:33 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:23:10.624 09:51:33 -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:23:10.624 09:51:33 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:23:10.624 09:51:33 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:23:10.624 09:51:33 -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:23:10.624 09:51:33 -- 
nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:23:10.624 09:51:33 -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:23:10.624 09:51:33 -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:23:10.624 09:51:33 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:23:10.624 09:51:33 -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:23:10.624 09:51:33 -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:23:10.624 09:51:33 -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:23:10.624 09:51:33 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:23:10.624 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:23:10.624 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.110 ms 00:23:10.624 00:23:10.624 --- 10.0.0.2 ping statistics --- 00:23:10.624 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:10.624 rtt min/avg/max/mdev = 0.110/0.110/0.110/0.000 ms 00:23:10.624 09:51:33 -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:23:10.624 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:23:10.624 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.045 ms 00:23:10.624 00:23:10.624 --- 10.0.0.3 ping statistics --- 00:23:10.624 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:10.624 rtt min/avg/max/mdev = 0.045/0.045/0.045/0.000 ms 00:23:10.624 09:51:33 -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:23:10.624 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:23:10.624 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.030 ms 00:23:10.624 00:23:10.624 --- 10.0.0.1 ping statistics --- 00:23:10.624 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:10.624 rtt min/avg/max/mdev = 0.030/0.030/0.030/0.000 ms 00:23:10.624 09:51:33 -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:10.624 09:51:33 -- nvmf/common.sh@422 -- # return 0 00:23:10.624 09:51:33 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:23:10.624 09:51:33 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:10.624 09:51:33 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:23:10.624 09:51:33 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:23:10.624 09:51:33 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:10.624 09:51:33 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:23:10.624 09:51:33 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:23:10.624 09:51:34 -- target/tls.sh@63 -- # nvmfappstart -m 0x2 --wait-for-rpc 00:23:10.624 09:51:34 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:23:10.624 09:51:34 -- common/autotest_common.sh@710 -- # xtrace_disable 00:23:10.624 09:51:34 -- common/autotest_common.sh@10 -- # set +x 00:23:10.624 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
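The nvmf_veth_init trace above builds the virtual topology that every NVMe/TCP connection in this job rides on: the target lives in its own network namespace and is reachable from the initiator side through veth pairs tied together by a bridge. A condensed, readable recap of the commands as they appear in the trace (the second target interface at 10.0.0.3 and the individual link-up steps follow the same pattern and are omitted):

    # The target gets its own namespace; veth pairs connect it to the root namespace.
    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br
    ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
    ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk

    # Addressing: 10.0.0.1 is the initiator, 10.0.0.2 the first target interface.
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if

    # A bridge joins the peer ends, and NVMe/TCP port 4420 is allowed in.
    ip link add nvmf_br type bridge
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br master nvmf_br
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
    iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT

The three pings that follow in the trace (10.0.0.2 and 10.0.0.3 from the root namespace, 10.0.0.1 from inside the namespace) are the sanity check that this wiring is up before the target is started; the earlier "Cannot find device" and "Cannot open network namespace" messages are only the cleanup of a previous run's devices and are harmless.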
00:23:10.624 09:51:34 -- nvmf/common.sh@470 -- # nvmfpid=72663 00:23:10.624 09:51:34 -- nvmf/common.sh@471 -- # waitforlisten 72663 00:23:10.624 09:51:34 -- common/autotest_common.sh@817 -- # '[' -z 72663 ']' 00:23:10.624 09:51:34 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:10.624 09:51:34 -- nvmf/common.sh@469 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 --wait-for-rpc 00:23:10.624 09:51:34 -- common/autotest_common.sh@822 -- # local max_retries=100 00:23:10.625 09:51:34 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:10.625 09:51:34 -- common/autotest_common.sh@826 -- # xtrace_disable 00:23:10.625 09:51:34 -- common/autotest_common.sh@10 -- # set +x 00:23:10.882 [2024-04-18 09:51:34.115090] Starting SPDK v24.05-pre git sha1 65b4e17c6 / DPDK 23.11.0 initialization... 00:23:10.882 [2024-04-18 09:51:34.115268] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:11.140 [2024-04-18 09:51:34.296337] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:11.398 [2024-04-18 09:51:34.611443] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:11.398 [2024-04-18 09:51:34.611548] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:11.398 [2024-04-18 09:51:34.611582] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:11.398 [2024-04-18 09:51:34.611623] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:11.398 [2024-04-18 09:51:34.611647] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
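The target is deliberately started with --wait-for-rpc: the TLS tests need the socket layer switched to the ssl implementation and the TLS version pinned before the subsystem framework initializes, which is why the trace below only calls framework_start_init after the sock_impl options have been probed and set. The essential ordering, using the same rpc.py calls that appear in the surrounding trace (tool paths shortened):

    # Start the target paused inside the namespace, waiting for configuration RPCs.
    ip netns exec nvmf_tgt_ns_spdk nvmf_tgt -i 0 -e 0xFFFF -m 0x2 --wait-for-rpc &

    # Configure the ssl socket implementation first...
    rpc.py sock_set_default_impl -i ssl
    rpc.py sock_impl_set_options -i ssl --tls-version 13

    # ...then let the rest of the application initialize.
    rpc.py framework_start_init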
00:23:11.398 [2024-04-18 09:51:34.611710] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:23:11.965 09:51:35 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:23:11.965 09:51:35 -- common/autotest_common.sh@850 -- # return 0 00:23:11.965 09:51:35 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:23:11.965 09:51:35 -- common/autotest_common.sh@716 -- # xtrace_disable 00:23:11.965 09:51:35 -- common/autotest_common.sh@10 -- # set +x 00:23:11.965 09:51:35 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:11.965 09:51:35 -- target/tls.sh@65 -- # '[' tcp '!=' tcp ']' 00:23:11.965 09:51:35 -- target/tls.sh@70 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_set_default_impl -i ssl 00:23:12.223 true 00:23:12.223 09:51:35 -- target/tls.sh@73 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:23:12.223 09:51:35 -- target/tls.sh@73 -- # jq -r .tls_version 00:23:12.482 09:51:35 -- target/tls.sh@73 -- # version=0 00:23:12.482 09:51:35 -- target/tls.sh@74 -- # [[ 0 != \0 ]] 00:23:12.482 09:51:35 -- target/tls.sh@80 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:23:12.740 09:51:35 -- target/tls.sh@81 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:23:12.740 09:51:35 -- target/tls.sh@81 -- # jq -r .tls_version 00:23:12.998 09:51:36 -- target/tls.sh@81 -- # version=13 00:23:12.998 09:51:36 -- target/tls.sh@82 -- # [[ 13 != \1\3 ]] 00:23:12.998 09:51:36 -- target/tls.sh@88 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 7 00:23:13.257 09:51:36 -- target/tls.sh@89 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:23:13.257 09:51:36 -- target/tls.sh@89 -- # jq -r .tls_version 00:23:13.516 09:51:36 -- target/tls.sh@89 -- # version=7 00:23:13.516 09:51:36 -- target/tls.sh@90 -- # [[ 7 != \7 ]] 00:23:13.516 09:51:36 -- target/tls.sh@96 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:23:13.516 09:51:36 -- target/tls.sh@96 -- # jq -r .enable_ktls 00:23:13.774 09:51:36 -- target/tls.sh@96 -- # ktls=false 00:23:13.774 09:51:36 -- target/tls.sh@97 -- # [[ false != \f\a\l\s\e ]] 00:23:13.774 09:51:36 -- target/tls.sh@103 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_set_options -i ssl --enable-ktls 00:23:14.032 09:51:37 -- target/tls.sh@104 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:23:14.032 09:51:37 -- target/tls.sh@104 -- # jq -r .enable_ktls 00:23:14.291 09:51:37 -- target/tls.sh@104 -- # ktls=true 00:23:14.291 09:51:37 -- target/tls.sh@105 -- # [[ true != \t\r\u\e ]] 00:23:14.291 09:51:37 -- target/tls.sh@111 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_set_options -i ssl --disable-ktls 00:23:14.549 09:51:37 -- target/tls.sh@112 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:23:14.549 09:51:37 -- target/tls.sh@112 -- # jq -r .enable_ktls 00:23:14.549 09:51:37 -- target/tls.sh@112 -- # ktls=false 00:23:14.549 09:51:37 -- target/tls.sh@113 -- # [[ false != \f\a\l\s\e ]] 00:23:14.549 09:51:37 -- target/tls.sh@118 -- # format_interchange_psk 00112233445566778899aabbccddeeff 1 00:23:14.549 09:51:37 -- nvmf/common.sh@704 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 1 00:23:14.549 09:51:37 -- nvmf/common.sh@691 -- # local prefix key digest 00:23:14.549 09:51:37 -- nvmf/common.sh@693 -- # 
prefix=NVMeTLSkey-1 00:23:14.549 09:51:37 -- nvmf/common.sh@693 -- # key=00112233445566778899aabbccddeeff 00:23:14.549 09:51:37 -- nvmf/common.sh@693 -- # digest=1 00:23:14.549 09:51:37 -- nvmf/common.sh@694 -- # python - 00:23:14.808 09:51:37 -- target/tls.sh@118 -- # key=NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:23:14.808 09:51:37 -- target/tls.sh@119 -- # format_interchange_psk ffeeddccbbaa99887766554433221100 1 00:23:14.808 09:51:37 -- nvmf/common.sh@704 -- # format_key NVMeTLSkey-1 ffeeddccbbaa99887766554433221100 1 00:23:14.808 09:51:37 -- nvmf/common.sh@691 -- # local prefix key digest 00:23:14.808 09:51:37 -- nvmf/common.sh@693 -- # prefix=NVMeTLSkey-1 00:23:14.808 09:51:37 -- nvmf/common.sh@693 -- # key=ffeeddccbbaa99887766554433221100 00:23:14.808 09:51:37 -- nvmf/common.sh@693 -- # digest=1 00:23:14.808 09:51:37 -- nvmf/common.sh@694 -- # python - 00:23:14.808 09:51:38 -- target/tls.sh@119 -- # key_2=NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:23:14.808 09:51:38 -- target/tls.sh@121 -- # mktemp 00:23:14.808 09:51:38 -- target/tls.sh@121 -- # key_path=/tmp/tmp.IkxghsqSJW 00:23:14.808 09:51:38 -- target/tls.sh@122 -- # mktemp 00:23:14.808 09:51:38 -- target/tls.sh@122 -- # key_2_path=/tmp/tmp.7b4f0eEEsC 00:23:14.808 09:51:38 -- target/tls.sh@124 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:23:14.808 09:51:38 -- target/tls.sh@125 -- # echo -n NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:23:14.808 09:51:38 -- target/tls.sh@127 -- # chmod 0600 /tmp/tmp.IkxghsqSJW 00:23:14.808 09:51:38 -- target/tls.sh@128 -- # chmod 0600 /tmp/tmp.7b4f0eEEsC 00:23:14.808 09:51:38 -- target/tls.sh@130 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:23:15.066 09:51:38 -- target/tls.sh@131 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py framework_start_init 00:23:15.634 09:51:38 -- target/tls.sh@133 -- # setup_nvmf_tgt /tmp/tmp.IkxghsqSJW 00:23:15.634 09:51:38 -- target/tls.sh@49 -- # local key=/tmp/tmp.IkxghsqSJW 00:23:15.634 09:51:38 -- target/tls.sh@51 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:23:15.893 [2024-04-18 09:51:39.130677] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:15.893 09:51:39 -- target/tls.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:23:16.151 09:51:39 -- target/tls.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:23:16.409 [2024-04-18 09:51:39.578906] tcp.c: 925:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:23:16.409 [2024-04-18 09:51:39.579246] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:16.409 09:51:39 -- target/tls.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:23:16.667 malloc0 00:23:16.667 09:51:39 -- target/tls.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:23:16.925 09:51:40 -- target/tls.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.IkxghsqSJW 00:23:17.184 [2024-04-18 09:51:40.420410] tcp.c:3652:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature 
PSK path to be removed in v24.09 00:23:17.184 09:51:40 -- target/tls.sh@137 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -S ssl -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 hostnqn:nqn.2016-06.io.spdk:host1' --psk-path /tmp/tmp.IkxghsqSJW 00:23:29.381 Initializing NVMe Controllers 00:23:29.381 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:23:29.381 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:23:29.381 Initialization complete. Launching workers. 00:23:29.381 ======================================================== 00:23:29.381 Latency(us) 00:23:29.381 Device Information : IOPS MiB/s Average min max 00:23:29.381 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 6315.42 24.67 10137.69 2314.20 12022.78 00:23:29.381 ======================================================== 00:23:29.381 Total : 6315.42 24.67 10137.69 2314.20 12022.78 00:23:29.381 00:23:29.381 09:51:50 -- target/tls.sh@143 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.IkxghsqSJW 00:23:29.381 09:51:50 -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:23:29.381 09:51:50 -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:23:29.381 09:51:50 -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:23:29.381 09:51:50 -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.IkxghsqSJW' 00:23:29.381 09:51:50 -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:23:29.381 09:51:50 -- target/tls.sh@28 -- # bdevperf_pid=72907 00:23:29.381 09:51:50 -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:23:29.381 09:51:50 -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:23:29.381 09:51:50 -- target/tls.sh@31 -- # waitforlisten 72907 /var/tmp/bdevperf.sock 00:23:29.381 09:51:50 -- common/autotest_common.sh@817 -- # '[' -z 72907 ']' 00:23:29.381 09:51:50 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:29.381 09:51:50 -- common/autotest_common.sh@822 -- # local max_retries=100 00:23:29.381 09:51:50 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:23:29.381 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:23:29.381 09:51:50 -- common/autotest_common.sh@826 -- # xtrace_disable 00:23:29.381 09:51:50 -- common/autotest_common.sh@10 -- # set +x 00:23:29.381 [2024-04-18 09:51:50.894699] Starting SPDK v24.05-pre git sha1 65b4e17c6 / DPDK 23.11.0 initialization... 
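The spdk_nvme_perf numbers above come from the TLS path that target/tls.sh assembled earlier in the trace: a TCP transport, subsystem cnode1 backed by a malloc namespace, a listener opened with -k so that TLS is required, and host1 registered with the PSK file. Collected in one place for readability (commands verbatim from the trace, long tool paths shortened):

    rpc.py nvmf_create_transport -t tcp -o
    rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10
    rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k
    rpc.py bdev_malloc_create 32 4096 -b malloc0
    rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
    rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 \
        --psk /tmp/tmp.IkxghsqSJW

    # Initiator side: perf connects with the ssl socket implementation and the same key.
    ip netns exec nvmf_tgt_ns_spdk spdk_nvme_perf -S ssl -q 64 -o 4096 -w randrw -M 30 -t 10 \
        -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 hostnqn:nqn.2016-06.io.spdk:host1' \
        --psk-path /tmp/tmp.IkxghsqSJW

The deprecation warnings about the PSK path are expected on this SPDK revision; the options still work but are scheduled for removal in v24.09.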
00:23:29.381 [2024-04-18 09:51:50.895182] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72907 ] 00:23:29.381 [2024-04-18 09:51:51.057256] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:29.381 [2024-04-18 09:51:51.332155] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:23:29.381 09:51:51 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:23:29.381 09:51:51 -- common/autotest_common.sh@850 -- # return 0 00:23:29.381 09:51:51 -- target/tls.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.IkxghsqSJW 00:23:29.381 [2024-04-18 09:51:52.061882] bdev_nvme_rpc.c: 515:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:23:29.381 [2024-04-18 09:51:52.062067] nvme_tcp.c:2577:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:23:29.381 TLSTESTn1 00:23:29.381 09:51:52 -- target/tls.sh@41 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:23:29.381 Running I/O for 10 seconds... 00:23:39.394 00:23:39.394 Latency(us) 00:23:39.394 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:39.394 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:23:39.394 Verification LBA range: start 0x0 length 0x2000 00:23:39.394 TLSTESTn1 : 10.03 2632.27 10.28 0.00 0.00 48502.60 5630.14 40751.48 00:23:39.394 =================================================================================================================== 00:23:39.394 Total : 2632.27 10.28 0.00 0.00 48502.60 5630.14 40751.48 00:23:39.394 0 00:23:39.394 09:52:02 -- target/tls.sh@44 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:23:39.394 09:52:02 -- target/tls.sh@45 -- # killprocess 72907 00:23:39.394 09:52:02 -- common/autotest_common.sh@936 -- # '[' -z 72907 ']' 00:23:39.394 09:52:02 -- common/autotest_common.sh@940 -- # kill -0 72907 00:23:39.394 09:52:02 -- common/autotest_common.sh@941 -- # uname 00:23:39.394 09:52:02 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:23:39.394 09:52:02 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 72907 00:23:39.394 killing process with pid 72907 00:23:39.394 Received shutdown signal, test time was about 10.000000 seconds 00:23:39.394 00:23:39.394 Latency(us) 00:23:39.394 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:39.394 =================================================================================================================== 00:23:39.394 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:23:39.394 09:52:02 -- common/autotest_common.sh@942 -- # process_name=reactor_2 00:23:39.394 09:52:02 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']' 00:23:39.394 09:52:02 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 72907' 00:23:39.394 09:52:02 -- common/autotest_common.sh@955 -- # kill 72907 00:23:39.394 [2024-04-18 09:52:02.361167] app.c: 937:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:23:39.394 
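The run_bdevperf helper whose trace appears above is the template for every TLS case that follows: bdevperf is started paused (-z) on its own RPC socket, a controller is attached over NVMe/TCP with a PSK, and bdevperf.py then drives the verify workload against the resulting TLSTESTn1 bdev. With the matching key for host1 the attach succeeds and the 10-second run completes (roughly 2632 IOPS above). In outline, with tool paths shortened:

    bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 &

    # host1 is registered on cnode1 with this PSK, so the handshake and attach succeed.
    rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp \
        -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 \
        -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.IkxghsqSJW

    bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests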
09:52:02 -- common/autotest_common.sh@960 -- # wait 72907 00:23:40.328 09:52:03 -- target/tls.sh@146 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.7b4f0eEEsC 00:23:40.328 09:52:03 -- common/autotest_common.sh@638 -- # local es=0 00:23:40.328 09:52:03 -- common/autotest_common.sh@640 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.7b4f0eEEsC 00:23:40.328 09:52:03 -- common/autotest_common.sh@626 -- # local arg=run_bdevperf 00:23:40.328 09:52:03 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:23:40.328 09:52:03 -- common/autotest_common.sh@630 -- # type -t run_bdevperf 00:23:40.328 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:23:40.328 09:52:03 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:23:40.328 09:52:03 -- common/autotest_common.sh@641 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.7b4f0eEEsC 00:23:40.328 09:52:03 -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:23:40.328 09:52:03 -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:23:40.328 09:52:03 -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:23:40.328 09:52:03 -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.7b4f0eEEsC' 00:23:40.328 09:52:03 -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:23:40.328 09:52:03 -- target/tls.sh@28 -- # bdevperf_pid=73046 00:23:40.328 09:52:03 -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:23:40.328 09:52:03 -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:23:40.328 09:52:03 -- target/tls.sh@31 -- # waitforlisten 73046 /var/tmp/bdevperf.sock 00:23:40.328 09:52:03 -- common/autotest_common.sh@817 -- # '[' -z 73046 ']' 00:23:40.328 09:52:03 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:40.328 09:52:03 -- common/autotest_common.sh@822 -- # local max_retries=100 00:23:40.328 09:52:03 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:23:40.328 09:52:03 -- common/autotest_common.sh@826 -- # xtrace_disable 00:23:40.328 09:52:03 -- common/autotest_common.sh@10 -- # set +x 00:23:40.328 [2024-04-18 09:52:03.619575] Starting SPDK v24.05-pre git sha1 65b4e17c6 / DPDK 23.11.0 initialization... 
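This second bdevperf instance (pid 73046) is the first negative case: host1 is registered on cnode1 with /tmp/tmp.IkxghsqSJW, so presenting the other key should break the TLS handshake and the attach is expected to fail. The NOT wrapper from autotest_common.sh inverts the exit status, so the expected failure counts as a pass. In outline (the attach command itself appears a little further down in the trace):

    # Wrapped in NOT: the step passes only if run_bdevperf fails.
    NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.7b4f0eEEsC

    # ...which boils down to an attach attempt with the mismatched key:
    rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp \
        -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 \
        -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.7b4f0eEEsC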
00:23:40.328 [2024-04-18 09:52:03.619963] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73046 ] 00:23:40.586 [2024-04-18 09:52:03.786089] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:40.850 [2024-04-18 09:52:04.026816] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:23:41.418 09:52:04 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:23:41.418 09:52:04 -- common/autotest_common.sh@850 -- # return 0 00:23:41.418 09:52:04 -- target/tls.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.7b4f0eEEsC 00:23:41.418 [2024-04-18 09:52:04.717799] bdev_nvme_rpc.c: 515:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:23:41.418 [2024-04-18 09:52:04.718325] nvme_tcp.c:2577:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:23:41.418 [2024-04-18 09:52:04.734248] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:23:41.418 [2024-04-18 09:52:04.734556] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x614000004e40 (107): Transport endpoint is not connected 00:23:41.418 [2024-04-18 09:52:04.735518] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x614000004e40 (9): Bad file descriptor 00:23:41.418 [2024-04-18 09:52:04.736504] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:23:41.418 [2024-04-18 09:52:04.736723] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:23:41.418 [2024-04-18 09:52:04.736764] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:23:41.418 request: 00:23:41.418 { 00:23:41.418 "name": "TLSTEST", 00:23:41.418 "trtype": "tcp", 00:23:41.418 "traddr": "10.0.0.2", 00:23:41.418 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:23:41.418 "adrfam": "ipv4", 00:23:41.418 "trsvcid": "4420", 00:23:41.418 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:23:41.418 "psk": "/tmp/tmp.7b4f0eEEsC", 00:23:41.418 "method": "bdev_nvme_attach_controller", 00:23:41.418 "req_id": 1 00:23:41.418 } 00:23:41.418 Got JSON-RPC error response 00:23:41.418 response: 00:23:41.418 { 00:23:41.418 "code": -32602, 00:23:41.418 "message": "Invalid parameters" 00:23:41.418 } 00:23:41.418 09:52:04 -- target/tls.sh@36 -- # killprocess 73046 00:23:41.418 09:52:04 -- common/autotest_common.sh@936 -- # '[' -z 73046 ']' 00:23:41.418 09:52:04 -- common/autotest_common.sh@940 -- # kill -0 73046 00:23:41.418 09:52:04 -- common/autotest_common.sh@941 -- # uname 00:23:41.418 09:52:04 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:23:41.418 09:52:04 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 73046 00:23:41.418 killing process with pid 73046 00:23:41.418 Received shutdown signal, test time was about 10.000000 seconds 00:23:41.418 00:23:41.418 Latency(us) 00:23:41.418 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:41.418 =================================================================================================================== 00:23:41.418 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:23:41.418 09:52:04 -- common/autotest_common.sh@942 -- # process_name=reactor_2 00:23:41.418 09:52:04 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']' 00:23:41.418 09:52:04 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 73046' 00:23:41.418 09:52:04 -- common/autotest_common.sh@955 -- # kill 73046 00:23:41.418 09:52:04 -- common/autotest_common.sh@960 -- # wait 73046 00:23:41.418 [2024-04-18 09:52:04.790027] app.c: 937:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:23:42.796 09:52:06 -- target/tls.sh@37 -- # return 1 00:23:42.796 09:52:06 -- common/autotest_common.sh@641 -- # es=1 00:23:42.796 09:52:06 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:23:42.796 09:52:06 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:23:42.796 09:52:06 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:23:42.796 09:52:06 -- target/tls.sh@149 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.IkxghsqSJW 00:23:42.796 09:52:06 -- common/autotest_common.sh@638 -- # local es=0 00:23:42.796 09:52:06 -- common/autotest_common.sh@640 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.IkxghsqSJW 00:23:42.796 09:52:06 -- common/autotest_common.sh@626 -- # local arg=run_bdevperf 00:23:42.796 09:52:06 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:23:42.796 09:52:06 -- common/autotest_common.sh@630 -- # type -t run_bdevperf 00:23:42.796 09:52:06 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:23:42.796 09:52:06 -- common/autotest_common.sh@641 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.IkxghsqSJW 00:23:42.796 09:52:06 -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:23:42.796 09:52:06 -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:23:42.796 09:52:06 -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host2 00:23:42.796 
09:52:06 -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.IkxghsqSJW' 00:23:42.796 09:52:06 -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:23:42.796 09:52:06 -- target/tls.sh@28 -- # bdevperf_pid=73087 00:23:42.796 09:52:06 -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:23:42.796 09:52:06 -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:23:42.796 09:52:06 -- target/tls.sh@31 -- # waitforlisten 73087 /var/tmp/bdevperf.sock 00:23:42.796 09:52:06 -- common/autotest_common.sh@817 -- # '[' -z 73087 ']' 00:23:42.796 09:52:06 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:42.796 09:52:06 -- common/autotest_common.sh@822 -- # local max_retries=100 00:23:42.796 09:52:06 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:23:42.796 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:23:42.796 09:52:06 -- common/autotest_common.sh@826 -- # xtrace_disable 00:23:42.796 09:52:06 -- common/autotest_common.sh@10 -- # set +x 00:23:42.796 [2024-04-18 09:52:06.137506] Starting SPDK v24.05-pre git sha1 65b4e17c6 / DPDK 23.11.0 initialization... 00:23:42.796 [2024-04-18 09:52:06.137870] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73087 ] 00:23:43.055 [2024-04-18 09:52:06.302686] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:43.313 [2024-04-18 09:52:06.539734] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:23:43.880 09:52:07 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:23:43.880 09:52:07 -- common/autotest_common.sh@850 -- # return 0 00:23:43.880 09:52:07 -- target/tls.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 --psk /tmp/tmp.IkxghsqSJW 00:23:44.139 [2024-04-18 09:52:07.353359] bdev_nvme_rpc.c: 515:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:23:44.139 [2024-04-18 09:52:07.353743] nvme_tcp.c:2577:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:23:44.139 [2024-04-18 09:52:07.372466] tcp.c: 878:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:23:44.139 [2024-04-18 09:52:07.372527] posix.c: 588:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:23:44.139 [2024-04-18 09:52:07.372628] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:23:44.139 [2024-04-18 09:52:07.373471] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x614000004e40 (107): Transport endpoint is not connected 00:23:44.139 [2024-04-18 09:52:07.374422] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x614000004e40 (9): Bad file descriptor 00:23:44.139 [2024-04-18 
09:52:07.375394] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:23:44.139 [2024-04-18 09:52:07.375480] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:23:44.139 [2024-04-18 09:52:07.375525] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:23:44.139 request: 00:23:44.139 { 00:23:44.139 "name": "TLSTEST", 00:23:44.139 "trtype": "tcp", 00:23:44.139 "traddr": "10.0.0.2", 00:23:44.139 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:23:44.139 "adrfam": "ipv4", 00:23:44.139 "trsvcid": "4420", 00:23:44.139 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:23:44.139 "psk": "/tmp/tmp.IkxghsqSJW", 00:23:44.139 "method": "bdev_nvme_attach_controller", 00:23:44.139 "req_id": 1 00:23:44.139 } 00:23:44.139 Got JSON-RPC error response 00:23:44.139 response: 00:23:44.139 { 00:23:44.139 "code": -32602, 00:23:44.139 "message": "Invalid parameters" 00:23:44.139 } 00:23:44.139 09:52:07 -- target/tls.sh@36 -- # killprocess 73087 00:23:44.139 09:52:07 -- common/autotest_common.sh@936 -- # '[' -z 73087 ']' 00:23:44.139 09:52:07 -- common/autotest_common.sh@940 -- # kill -0 73087 00:23:44.139 09:52:07 -- common/autotest_common.sh@941 -- # uname 00:23:44.139 09:52:07 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:23:44.139 09:52:07 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 73087 00:23:44.139 killing process with pid 73087 00:23:44.139 Received shutdown signal, test time was about 10.000000 seconds 00:23:44.139 00:23:44.139 Latency(us) 00:23:44.139 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:44.139 =================================================================================================================== 00:23:44.139 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:23:44.139 09:52:07 -- common/autotest_common.sh@942 -- # process_name=reactor_2 00:23:44.139 09:52:07 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']' 00:23:44.139 09:52:07 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 73087' 00:23:44.139 09:52:07 -- common/autotest_common.sh@955 -- # kill 73087 00:23:44.139 09:52:07 -- common/autotest_common.sh@960 -- # wait 73087 00:23:44.139 [2024-04-18 09:52:07.423939] app.c: 937:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:23:45.517 09:52:08 -- target/tls.sh@37 -- # return 1 00:23:45.517 09:52:08 -- common/autotest_common.sh@641 -- # es=1 00:23:45.517 09:52:08 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:23:45.517 09:52:08 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:23:45.517 09:52:08 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:23:45.517 09:52:08 -- target/tls.sh@152 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.IkxghsqSJW 00:23:45.517 09:52:08 -- common/autotest_common.sh@638 -- # local es=0 00:23:45.517 09:52:08 -- common/autotest_common.sh@640 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.IkxghsqSJW 00:23:45.517 09:52:08 -- common/autotest_common.sh@626 -- # local arg=run_bdevperf 00:23:45.517 09:52:08 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:23:45.517 09:52:08 -- common/autotest_common.sh@630 -- # type -t run_bdevperf 00:23:45.517 09:52:08 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:23:45.517 
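The failure above shows what the target does when the host NQN is not registered on the subsystem: during the TLS handshake it looks up a PSK under the identity string NVMe0R01 <hostnqn> <subnqn> (here NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1), finds nothing, and drops the connection, which the initiator reports as a dropped socket and a failed bdev_nvme_attach_controller call. The mirror-image case, host1 against the never-created cnode2, fails the same way; both are expected failures, hence the NOT wrapper:

    NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.IkxghsqSJW
    NOT run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.IkxghsqSJW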
09:52:08 -- common/autotest_common.sh@641 -- # run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.IkxghsqSJW 00:23:45.517 09:52:08 -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:23:45.517 09:52:08 -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode2 00:23:45.517 09:52:08 -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:23:45.517 09:52:08 -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.IkxghsqSJW' 00:23:45.517 09:52:08 -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:23:45.517 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:23:45.517 09:52:08 -- target/tls.sh@28 -- # bdevperf_pid=73121 00:23:45.517 09:52:08 -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:23:45.517 09:52:08 -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:23:45.517 09:52:08 -- target/tls.sh@31 -- # waitforlisten 73121 /var/tmp/bdevperf.sock 00:23:45.517 09:52:08 -- common/autotest_common.sh@817 -- # '[' -z 73121 ']' 00:23:45.517 09:52:08 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:45.517 09:52:08 -- common/autotest_common.sh@822 -- # local max_retries=100 00:23:45.517 09:52:08 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:23:45.517 09:52:08 -- common/autotest_common.sh@826 -- # xtrace_disable 00:23:45.517 09:52:08 -- common/autotest_common.sh@10 -- # set +x 00:23:45.517 [2024-04-18 09:52:08.782767] Starting SPDK v24.05-pre git sha1 65b4e17c6 / DPDK 23.11.0 initialization... 00:23:45.517 [2024-04-18 09:52:08.782946] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73121 ] 00:23:45.775 [2024-04-18 09:52:08.955893] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:46.033 [2024-04-18 09:52:09.224741] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:23:46.600 09:52:09 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:23:46.600 09:52:09 -- common/autotest_common.sh@850 -- # return 0 00:23:46.600 09:52:09 -- target/tls.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.IkxghsqSJW 00:23:46.600 [2024-04-18 09:52:09.903612] bdev_nvme_rpc.c: 515:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:23:46.600 [2024-04-18 09:52:09.904119] nvme_tcp.c:2577:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:23:46.600 [2024-04-18 09:52:09.914429] tcp.c: 878:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:23:46.600 [2024-04-18 09:52:09.914679] posix.c: 588:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:23:46.600 [2024-04-18 09:52:09.914921] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not 
connected 00:23:46.600 [2024-04-18 09:52:09.915544] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x614000004e40 (107): Transport endpoint is not connected 00:23:46.600 [2024-04-18 09:52:09.916523] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x614000004e40 (9): Bad file descriptor 00:23:46.600 [2024-04-18 09:52:09.917511] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2] Ctrlr is in error state 00:23:46.600 [2024-04-18 09:52:09.917557] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:23:46.600 [2024-04-18 09:52:09.917578] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2] in failed state. 00:23:46.600 request: 00:23:46.600 { 00:23:46.600 "name": "TLSTEST", 00:23:46.600 "trtype": "tcp", 00:23:46.600 "traddr": "10.0.0.2", 00:23:46.600 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:23:46.600 "adrfam": "ipv4", 00:23:46.600 "trsvcid": "4420", 00:23:46.600 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:23:46.600 "psk": "/tmp/tmp.IkxghsqSJW", 00:23:46.600 "method": "bdev_nvme_attach_controller", 00:23:46.600 "req_id": 1 00:23:46.600 } 00:23:46.600 Got JSON-RPC error response 00:23:46.600 response: 00:23:46.600 { 00:23:46.600 "code": -32602, 00:23:46.600 "message": "Invalid parameters" 00:23:46.600 } 00:23:46.600 09:52:09 -- target/tls.sh@36 -- # killprocess 73121 00:23:46.600 09:52:09 -- common/autotest_common.sh@936 -- # '[' -z 73121 ']' 00:23:46.600 09:52:09 -- common/autotest_common.sh@940 -- # kill -0 73121 00:23:46.600 09:52:09 -- common/autotest_common.sh@941 -- # uname 00:23:46.600 09:52:09 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:23:46.600 09:52:09 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 73121 00:23:46.600 09:52:09 -- common/autotest_common.sh@942 -- # process_name=reactor_2 00:23:46.600 09:52:09 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']' 00:23:46.600 09:52:09 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 73121' 00:23:46.600 killing process with pid 73121 00:23:46.600 09:52:09 -- common/autotest_common.sh@955 -- # kill 73121 00:23:46.600 Received shutdown signal, test time was about 10.000000 seconds 00:23:46.600 00:23:46.600 Latency(us) 00:23:46.600 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:46.600 =================================================================================================================== 00:23:46.600 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:23:46.600 09:52:09 -- common/autotest_common.sh@960 -- # wait 73121 00:23:46.600 [2024-04-18 09:52:09.968150] app.c: 937:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:23:47.979 09:52:11 -- target/tls.sh@37 -- # return 1 00:23:47.979 09:52:11 -- common/autotest_common.sh@641 -- # es=1 00:23:47.979 09:52:11 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:23:47.979 09:52:11 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:23:47.979 09:52:11 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:23:47.979 09:52:11 -- target/tls.sh@155 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:23:47.979 09:52:11 -- common/autotest_common.sh@638 -- # local es=0 00:23:47.979 09:52:11 -- common/autotest_common.sh@640 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 
00:23:47.979 09:52:11 -- common/autotest_common.sh@626 -- # local arg=run_bdevperf 00:23:47.979 09:52:11 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:23:47.979 09:52:11 -- common/autotest_common.sh@630 -- # type -t run_bdevperf 00:23:47.979 09:52:11 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:23:47.979 09:52:11 -- common/autotest_common.sh@641 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:23:47.979 09:52:11 -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:23:47.979 09:52:11 -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:23:47.979 09:52:11 -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:23:47.979 09:52:11 -- target/tls.sh@23 -- # psk= 00:23:47.979 09:52:11 -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:23:47.979 09:52:11 -- target/tls.sh@28 -- # bdevperf_pid=73161 00:23:47.979 09:52:11 -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:23:47.979 09:52:11 -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:23:47.979 09:52:11 -- target/tls.sh@31 -- # waitforlisten 73161 /var/tmp/bdevperf.sock 00:23:47.979 09:52:11 -- common/autotest_common.sh@817 -- # '[' -z 73161 ']' 00:23:47.979 09:52:11 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:47.979 09:52:11 -- common/autotest_common.sh@822 -- # local max_retries=100 00:23:47.979 09:52:11 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:23:47.979 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:23:47.979 09:52:11 -- common/autotest_common.sh@826 -- # xtrace_disable 00:23:47.979 09:52:11 -- common/autotest_common.sh@10 -- # set +x 00:23:47.979 [2024-04-18 09:52:11.296100] Starting SPDK v24.05-pre git sha1 65b4e17c6 / DPDK 23.11.0 initialization... 
00:23:47.979 [2024-04-18 09:52:11.296515] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73161 ] 00:23:48.239 [2024-04-18 09:52:11.468793] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:48.497 [2024-04-18 09:52:11.739742] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:23:49.064 09:52:12 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:23:49.064 09:52:12 -- common/autotest_common.sh@850 -- # return 0 00:23:49.064 09:52:12 -- target/tls.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:23:49.323 [2024-04-18 09:52:12.501390] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:23:49.323 [2024-04-18 09:52:12.503420] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x614000004e40 (9): Bad file descriptor 00:23:49.323 [2024-04-18 09:52:12.504412] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:23:49.323 [2024-04-18 09:52:12.504452] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:23:49.323 [2024-04-18 09:52:12.504479] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:23:49.323 request: 00:23:49.323 { 00:23:49.323 "name": "TLSTEST", 00:23:49.323 "trtype": "tcp", 00:23:49.323 "traddr": "10.0.0.2", 00:23:49.323 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:23:49.323 "adrfam": "ipv4", 00:23:49.323 "trsvcid": "4420", 00:23:49.323 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:23:49.323 "method": "bdev_nvme_attach_controller", 00:23:49.323 "req_id": 1 00:23:49.323 } 00:23:49.323 Got JSON-RPC error response 00:23:49.323 response: 00:23:49.323 { 00:23:49.323 "code": -32602, 00:23:49.323 "message": "Invalid parameters" 00:23:49.323 } 00:23:49.323 09:52:12 -- target/tls.sh@36 -- # killprocess 73161 00:23:49.323 09:52:12 -- common/autotest_common.sh@936 -- # '[' -z 73161 ']' 00:23:49.323 09:52:12 -- common/autotest_common.sh@940 -- # kill -0 73161 00:23:49.323 09:52:12 -- common/autotest_common.sh@941 -- # uname 00:23:49.323 09:52:12 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:23:49.323 09:52:12 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 73161 00:23:49.323 killing process with pid 73161 00:23:49.323 Received shutdown signal, test time was about 10.000000 seconds 00:23:49.323 00:23:49.323 Latency(us) 00:23:49.323 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:49.323 =================================================================================================================== 00:23:49.323 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:23:49.323 09:52:12 -- common/autotest_common.sh@942 -- # process_name=reactor_2 00:23:49.323 09:52:12 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']' 00:23:49.323 09:52:12 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 73161' 00:23:49.323 09:52:12 -- common/autotest_common.sh@955 -- # kill 73161 00:23:49.323 09:52:12 -- common/autotest_common.sh@960 -- # wait 73161 
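The kill/wait pattern that keeps repeating in this log (kill -0, ps --no-headers -o comm=, kill, wait) is the shared killprocess helper. The sketch below is a simplified reconstruction of that pattern; the real helper in autotest_common.sh also branches on the OS and on sudo-wrapped processes, so treat it as an approximation rather than the script itself.

  killprocess() {                                   # approximation of the traced helper
      local pid=$1
      [ -n "$pid" ] || return 1                     # the '[' -z "$pid" ']' guard in the trace
      kill -0 "$pid" || return 1                    # is the process still alive?
      local process_name
      process_name=$(ps --no-headers -o comm= "$pid")
      if [ "$process_name" != sudo ]; then          # reactor_1 / reactor_2 in this log
          echo "killing process with pid $pid"
          kill "$pid"
      fi
      wait "$pid" || true                           # collect the exit status
  }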
00:23:50.720 09:52:13 -- target/tls.sh@37 -- # return 1 00:23:50.720 09:52:13 -- common/autotest_common.sh@641 -- # es=1 00:23:50.720 09:52:13 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:23:50.720 09:52:13 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:23:50.720 09:52:13 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:23:50.720 09:52:13 -- target/tls.sh@158 -- # killprocess 72663 00:23:50.720 09:52:13 -- common/autotest_common.sh@936 -- # '[' -z 72663 ']' 00:23:50.720 09:52:13 -- common/autotest_common.sh@940 -- # kill -0 72663 00:23:50.720 09:52:13 -- common/autotest_common.sh@941 -- # uname 00:23:50.720 09:52:13 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:23:50.720 09:52:13 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 72663 00:23:50.720 killing process with pid 72663 00:23:50.720 09:52:13 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:23:50.720 09:52:13 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:23:50.720 09:52:13 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 72663' 00:23:50.720 09:52:13 -- common/autotest_common.sh@955 -- # kill 72663 00:23:50.720 [2024-04-18 09:52:13.807630] app.c: 937:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:23:50.720 09:52:13 -- common/autotest_common.sh@960 -- # wait 72663 00:23:52.141 09:52:15 -- target/tls.sh@159 -- # format_interchange_psk 00112233445566778899aabbccddeeff0011223344556677 2 00:23:52.141 09:52:15 -- nvmf/common.sh@704 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff0011223344556677 2 00:23:52.141 09:52:15 -- nvmf/common.sh@691 -- # local prefix key digest 00:23:52.141 09:52:15 -- nvmf/common.sh@693 -- # prefix=NVMeTLSkey-1 00:23:52.141 09:52:15 -- nvmf/common.sh@693 -- # key=00112233445566778899aabbccddeeff0011223344556677 00:23:52.141 09:52:15 -- nvmf/common.sh@693 -- # digest=2 00:23:52.141 09:52:15 -- nvmf/common.sh@694 -- # python - 00:23:52.141 09:52:15 -- target/tls.sh@159 -- # key_long=NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:23:52.141 09:52:15 -- target/tls.sh@160 -- # mktemp 00:23:52.141 09:52:15 -- target/tls.sh@160 -- # key_long_path=/tmp/tmp.jUoN3zF1yl 00:23:52.141 09:52:15 -- target/tls.sh@161 -- # echo -n NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:23:52.141 09:52:15 -- target/tls.sh@162 -- # chmod 0600 /tmp/tmp.jUoN3zF1yl 00:23:52.141 09:52:15 -- target/tls.sh@163 -- # nvmfappstart -m 0x2 00:23:52.141 09:52:15 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:23:52.141 09:52:15 -- common/autotest_common.sh@710 -- # xtrace_disable 00:23:52.141 09:52:15 -- common/autotest_common.sh@10 -- # set +x 00:23:52.141 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
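The key_long value built just above follows the NVMe TLS PSK interchange layout: the literal prefix NVMeTLSkey-1, a digest identifier (02 here, matching the digest argument of 2, commonly SHA-384), and a base64 field. The sketch below shows one way such a string can be assembled; the detail that a little-endian CRC-32 of the key bytes is appended before base64 encoding is an assumption about what the format_interchange_psk helper does, so check nvmf/common.sh before relying on it.

  key="00112233445566778899aabbccddeeff0011223344556677"   # same input as the test above

  # Assumption: base64 field = base64(key bytes || CRC-32 of key bytes, LSB first).
  b64=$(python3 - "$key" <<'PYEOF'
  import base64, struct, sys, zlib
  key = sys.argv[1].encode()
  crc = struct.pack("<I", zlib.crc32(key) & 0xffffffff)
  print(base64.b64encode(key + crc).decode(), end="")
  PYEOF
  )
  key_long="NVMeTLSkey-1:02:${b64}:"

  # The PSK file must stay private; a world-readable copy is rejected later in
  # this log with "Incorrect permissions for PSK file".
  key_path=$(mktemp)
  echo -n "$key_long" > "$key_path"
  chmod 0600 "$key_path"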
00:23:52.141 09:52:15 -- nvmf/common.sh@470 -- # nvmfpid=73226 00:23:52.141 09:52:15 -- nvmf/common.sh@471 -- # waitforlisten 73226 00:23:52.141 09:52:15 -- common/autotest_common.sh@817 -- # '[' -z 73226 ']' 00:23:52.141 09:52:15 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:52.141 09:52:15 -- nvmf/common.sh@469 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:23:52.141 09:52:15 -- common/autotest_common.sh@822 -- # local max_retries=100 00:23:52.141 09:52:15 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:52.141 09:52:15 -- common/autotest_common.sh@826 -- # xtrace_disable 00:23:52.141 09:52:15 -- common/autotest_common.sh@10 -- # set +x 00:23:52.141 [2024-04-18 09:52:15.329381] Starting SPDK v24.05-pre git sha1 65b4e17c6 / DPDK 23.11.0 initialization... 00:23:52.141 [2024-04-18 09:52:15.330284] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:52.141 [2024-04-18 09:52:15.516564] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:52.400 [2024-04-18 09:52:15.760201] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:52.400 [2024-04-18 09:52:15.760268] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:52.400 [2024-04-18 09:52:15.760290] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:52.400 [2024-04-18 09:52:15.760319] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:52.400 [2024-04-18 09:52:15.760335] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:23:52.400 [2024-04-18 09:52:15.760382] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:23:52.967 09:52:16 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:23:52.967 09:52:16 -- common/autotest_common.sh@850 -- # return 0 00:23:52.967 09:52:16 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:23:52.967 09:52:16 -- common/autotest_common.sh@716 -- # xtrace_disable 00:23:52.967 09:52:16 -- common/autotest_common.sh@10 -- # set +x 00:23:52.967 09:52:16 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:52.967 09:52:16 -- target/tls.sh@165 -- # setup_nvmf_tgt /tmp/tmp.jUoN3zF1yl 00:23:52.967 09:52:16 -- target/tls.sh@49 -- # local key=/tmp/tmp.jUoN3zF1yl 00:23:52.967 09:52:16 -- target/tls.sh@51 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:23:53.225 [2024-04-18 09:52:16.516119] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:53.225 09:52:16 -- target/tls.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:23:53.483 09:52:16 -- target/tls.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:23:53.740 [2024-04-18 09:52:17.032300] tcp.c: 925:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:23:53.740 [2024-04-18 09:52:17.032620] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:53.740 09:52:17 -- target/tls.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:23:53.998 malloc0 00:23:53.998 09:52:17 -- target/tls.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:23:54.256 09:52:17 -- target/tls.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.jUoN3zF1yl 00:23:54.514 [2024-04-18 09:52:17.833233] tcp.c:3652:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:23:54.514 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
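Pulled together, the setup_nvmf_tgt steps traced above reduce to the RPC sequence below, using the same subsystem name, listen address, and key path as this run. It is a sketch of the flow, not the script verbatim; rpc.py talks to the target over its default /var/tmp/spdk.sock.

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

  $rpc nvmf_create_transport -t tcp -o                                  # TCP transport, flags as in this run
  $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 \
       -s SPDK00000000000001 -m 10                                      # serial number, max 10 namespaces
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 \
       -t tcp -a 10.0.0.2 -s 4420 -k                                    # -k requests the TLS secure channel
  $rpc bdev_malloc_create 32 4096 -b malloc0                            # 32 MiB malloc bdev, 4 KiB blocks
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1    # expose it as namespace 1
  $rpc nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 \
       nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.jUoN3zF1yl              # per-host PSK (file path form)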
00:23:54.514 09:52:17 -- target/tls.sh@167 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.jUoN3zF1yl 00:23:54.514 09:52:17 -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:23:54.514 09:52:17 -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:23:54.514 09:52:17 -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:23:54.514 09:52:17 -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.jUoN3zF1yl' 00:23:54.514 09:52:17 -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:23:54.514 09:52:17 -- target/tls.sh@28 -- # bdevperf_pid=73277 00:23:54.514 09:52:17 -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:23:54.514 09:52:17 -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:23:54.514 09:52:17 -- target/tls.sh@31 -- # waitforlisten 73277 /var/tmp/bdevperf.sock 00:23:54.514 09:52:17 -- common/autotest_common.sh@817 -- # '[' -z 73277 ']' 00:23:54.514 09:52:17 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:54.514 09:52:17 -- common/autotest_common.sh@822 -- # local max_retries=100 00:23:54.514 09:52:17 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:23:54.514 09:52:17 -- common/autotest_common.sh@826 -- # xtrace_disable 00:23:54.514 09:52:17 -- common/autotest_common.sh@10 -- # set +x 00:23:54.772 [2024-04-18 09:52:17.935942] Starting SPDK v24.05-pre git sha1 65b4e17c6 / DPDK 23.11.0 initialization... 00:23:54.772 [2024-04-18 09:52:17.936372] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73277 ] 00:23:54.772 [2024-04-18 09:52:18.101121] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:55.030 [2024-04-18 09:52:18.371325] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:23:55.597 09:52:18 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:23:55.597 09:52:18 -- common/autotest_common.sh@850 -- # return 0 00:23:55.597 09:52:18 -- target/tls.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.jUoN3zF1yl 00:23:55.855 [2024-04-18 09:52:19.056930] bdev_nvme_rpc.c: 515:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:23:55.855 [2024-04-18 09:52:19.057547] nvme_tcp.c:2577:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:23:55.855 TLSTESTn1 00:23:55.855 09:52:19 -- target/tls.sh@41 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:23:56.113 Running I/O for 10 seconds... 
00:24:06.086 00:24:06.086 Latency(us) 00:24:06.086 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:06.086 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:24:06.086 Verification LBA range: start 0x0 length 0x2000 00:24:06.086 TLSTESTn1 : 10.02 2683.41 10.48 0.00 0.00 47618.97 3753.43 39798.23 00:24:06.086 =================================================================================================================== 00:24:06.086 Total : 2683.41 10.48 0.00 0.00 47618.97 3753.43 39798.23 00:24:06.086 0 00:24:06.086 09:52:29 -- target/tls.sh@44 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:24:06.086 09:52:29 -- target/tls.sh@45 -- # killprocess 73277 00:24:06.086 09:52:29 -- common/autotest_common.sh@936 -- # '[' -z 73277 ']' 00:24:06.086 09:52:29 -- common/autotest_common.sh@940 -- # kill -0 73277 00:24:06.086 09:52:29 -- common/autotest_common.sh@941 -- # uname 00:24:06.086 09:52:29 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:24:06.086 09:52:29 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 73277 00:24:06.086 killing process with pid 73277 00:24:06.086 Received shutdown signal, test time was about 10.000000 seconds 00:24:06.086 00:24:06.086 Latency(us) 00:24:06.086 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:06.086 =================================================================================================================== 00:24:06.086 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:24:06.086 09:52:29 -- common/autotest_common.sh@942 -- # process_name=reactor_2 00:24:06.086 09:52:29 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']' 00:24:06.086 09:52:29 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 73277' 00:24:06.086 09:52:29 -- common/autotest_common.sh@955 -- # kill 73277 00:24:06.086 [2024-04-18 09:52:29.346890] app.c: 937:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:24:06.086 09:52:29 -- common/autotest_common.sh@960 -- # wait 73277 00:24:07.544 09:52:30 -- target/tls.sh@170 -- # chmod 0666 /tmp/tmp.jUoN3zF1yl 00:24:07.544 09:52:30 -- target/tls.sh@171 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.jUoN3zF1yl 00:24:07.544 09:52:30 -- common/autotest_common.sh@638 -- # local es=0 00:24:07.544 09:52:30 -- common/autotest_common.sh@640 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.jUoN3zF1yl 00:24:07.544 09:52:30 -- common/autotest_common.sh@626 -- # local arg=run_bdevperf 00:24:07.544 09:52:30 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:24:07.544 09:52:30 -- common/autotest_common.sh@630 -- # type -t run_bdevperf 00:24:07.544 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
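On the initiator side, the TLSTESTn1 pass recorded above corresponds roughly to the sequence below (same socket paths, NQNs, and key file as this run). The negative test around this point repeats the same attach after chmod 0666 on the key file and is expected to fail the PSK permission check.

  # Start bdevperf idle (-z) on its own RPC socket and wait for it to listen.
  /home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
      -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 &

  # Attach the TLS listener with the PSK; with the 0600 key the namespace shows up
  # as bdev TLSTESTn1, with a world-readable key the call fails with
  # "Incorrect permissions for PSK file".
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock \
      bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 \
      -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 \
      --psk /tmp/tmp.jUoN3zF1yl

  # Kick off the verify workload through bdevperf's RPC helper.
  /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py \
      -t 20 -s /var/tmp/bdevperf.sock perform_tests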
00:24:07.544 09:52:30 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:24:07.544 09:52:30 -- common/autotest_common.sh@641 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.jUoN3zF1yl 00:24:07.544 09:52:30 -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:24:07.544 09:52:30 -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:24:07.544 09:52:30 -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:24:07.544 09:52:30 -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.jUoN3zF1yl' 00:24:07.544 09:52:30 -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:24:07.544 09:52:30 -- target/tls.sh@28 -- # bdevperf_pid=73423 00:24:07.544 09:52:30 -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:24:07.544 09:52:30 -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:24:07.544 09:52:30 -- target/tls.sh@31 -- # waitforlisten 73423 /var/tmp/bdevperf.sock 00:24:07.544 09:52:30 -- common/autotest_common.sh@817 -- # '[' -z 73423 ']' 00:24:07.544 09:52:30 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:24:07.544 09:52:30 -- common/autotest_common.sh@822 -- # local max_retries=100 00:24:07.544 09:52:30 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:24:07.544 09:52:30 -- common/autotest_common.sh@826 -- # xtrace_disable 00:24:07.544 09:52:30 -- common/autotest_common.sh@10 -- # set +x 00:24:07.544 [2024-04-18 09:52:30.651309] Starting SPDK v24.05-pre git sha1 65b4e17c6 / DPDK 23.11.0 initialization... 00:24:07.544 [2024-04-18 09:52:30.651704] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73423 ] 00:24:07.544 [2024-04-18 09:52:30.823108] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:07.803 [2024-04-18 09:52:31.062585] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:24:08.370 09:52:31 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:24:08.370 09:52:31 -- common/autotest_common.sh@850 -- # return 0 00:24:08.370 09:52:31 -- target/tls.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.jUoN3zF1yl 00:24:08.629 [2024-04-18 09:52:31.772299] bdev_nvme_rpc.c: 515:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:24:08.629 [2024-04-18 09:52:31.772621] bdev_nvme.c:6054:bdev_nvme_load_psk: *ERROR*: Incorrect permissions for PSK file 00:24:08.629 [2024-04-18 09:52:31.772828] bdev_nvme.c:6163:bdev_nvme_create: *ERROR*: Could not load PSK from /tmp/tmp.jUoN3zF1yl 00:24:08.629 request: 00:24:08.629 { 00:24:08.629 "name": "TLSTEST", 00:24:08.629 "trtype": "tcp", 00:24:08.629 "traddr": "10.0.0.2", 00:24:08.629 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:24:08.629 "adrfam": "ipv4", 00:24:08.629 "trsvcid": "4420", 00:24:08.629 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:24:08.629 "psk": "/tmp/tmp.jUoN3zF1yl", 00:24:08.629 "method": "bdev_nvme_attach_controller", 00:24:08.629 "req_id": 1 00:24:08.629 } 00:24:08.629 Got JSON-RPC error response 00:24:08.629 response: 00:24:08.629 
{ 00:24:08.629 "code": -1, 00:24:08.629 "message": "Operation not permitted" 00:24:08.629 } 00:24:08.629 09:52:31 -- target/tls.sh@36 -- # killprocess 73423 00:24:08.629 09:52:31 -- common/autotest_common.sh@936 -- # '[' -z 73423 ']' 00:24:08.629 09:52:31 -- common/autotest_common.sh@940 -- # kill -0 73423 00:24:08.629 09:52:31 -- common/autotest_common.sh@941 -- # uname 00:24:08.629 09:52:31 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:24:08.629 09:52:31 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 73423 00:24:08.629 killing process with pid 73423 00:24:08.629 Received shutdown signal, test time was about 10.000000 seconds 00:24:08.629 00:24:08.629 Latency(us) 00:24:08.629 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:08.629 =================================================================================================================== 00:24:08.629 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:24:08.629 09:52:31 -- common/autotest_common.sh@942 -- # process_name=reactor_2 00:24:08.629 09:52:31 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']' 00:24:08.629 09:52:31 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 73423' 00:24:08.629 09:52:31 -- common/autotest_common.sh@955 -- # kill 73423 00:24:08.629 09:52:31 -- common/autotest_common.sh@960 -- # wait 73423 00:24:10.007 09:52:32 -- target/tls.sh@37 -- # return 1 00:24:10.007 09:52:32 -- common/autotest_common.sh@641 -- # es=1 00:24:10.007 09:52:32 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:24:10.007 09:52:32 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:24:10.007 09:52:32 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:24:10.007 09:52:32 -- target/tls.sh@174 -- # killprocess 73226 00:24:10.007 09:52:32 -- common/autotest_common.sh@936 -- # '[' -z 73226 ']' 00:24:10.007 09:52:32 -- common/autotest_common.sh@940 -- # kill -0 73226 00:24:10.007 09:52:32 -- common/autotest_common.sh@941 -- # uname 00:24:10.007 09:52:32 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:24:10.007 09:52:32 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 73226 00:24:10.007 killing process with pid 73226 00:24:10.007 09:52:33 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:24:10.007 09:52:33 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:24:10.007 09:52:33 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 73226' 00:24:10.007 09:52:33 -- common/autotest_common.sh@955 -- # kill 73226 00:24:10.007 [2024-04-18 09:52:33.002076] app.c: 937:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:24:10.007 09:52:33 -- common/autotest_common.sh@960 -- # wait 73226 00:24:10.944 09:52:34 -- target/tls.sh@175 -- # nvmfappstart -m 0x2 00:24:10.945 09:52:34 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:24:10.945 09:52:34 -- common/autotest_common.sh@710 -- # xtrace_disable 00:24:10.945 09:52:34 -- common/autotest_common.sh@10 -- # set +x 00:24:10.945 09:52:34 -- nvmf/common.sh@470 -- # nvmfpid=73475 00:24:10.945 09:52:34 -- nvmf/common.sh@469 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:24:10.945 09:52:34 -- nvmf/common.sh@471 -- # waitforlisten 73475 00:24:10.945 09:52:34 -- common/autotest_common.sh@817 -- # '[' -z 73475 ']' 00:24:10.945 09:52:34 -- common/autotest_common.sh@821 -- # local 
rpc_addr=/var/tmp/spdk.sock 00:24:10.945 09:52:34 -- common/autotest_common.sh@822 -- # local max_retries=100 00:24:10.945 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:10.945 09:52:34 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:10.945 09:52:34 -- common/autotest_common.sh@826 -- # xtrace_disable 00:24:10.945 09:52:34 -- common/autotest_common.sh@10 -- # set +x 00:24:11.203 [2024-04-18 09:52:34.390052] Starting SPDK v24.05-pre git sha1 65b4e17c6 / DPDK 23.11.0 initialization... 00:24:11.203 [2024-04-18 09:52:34.390266] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:11.203 [2024-04-18 09:52:34.568567] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:11.462 [2024-04-18 09:52:34.843458] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:11.462 [2024-04-18 09:52:34.843541] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:11.462 [2024-04-18 09:52:34.843564] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:11.462 [2024-04-18 09:52:34.843595] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:11.462 [2024-04-18 09:52:34.843611] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:24:11.462 [2024-04-18 09:52:34.843655] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:24:12.029 09:52:35 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:24:12.029 09:52:35 -- common/autotest_common.sh@850 -- # return 0 00:24:12.029 09:52:35 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:24:12.029 09:52:35 -- common/autotest_common.sh@716 -- # xtrace_disable 00:24:12.029 09:52:35 -- common/autotest_common.sh@10 -- # set +x 00:24:12.029 09:52:35 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:12.029 09:52:35 -- target/tls.sh@177 -- # NOT setup_nvmf_tgt /tmp/tmp.jUoN3zF1yl 00:24:12.029 09:52:35 -- common/autotest_common.sh@638 -- # local es=0 00:24:12.029 09:52:35 -- common/autotest_common.sh@640 -- # valid_exec_arg setup_nvmf_tgt /tmp/tmp.jUoN3zF1yl 00:24:12.029 09:52:35 -- common/autotest_common.sh@626 -- # local arg=setup_nvmf_tgt 00:24:12.029 09:52:35 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:24:12.029 09:52:35 -- common/autotest_common.sh@630 -- # type -t setup_nvmf_tgt 00:24:12.029 09:52:35 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:24:12.029 09:52:35 -- common/autotest_common.sh@641 -- # setup_nvmf_tgt /tmp/tmp.jUoN3zF1yl 00:24:12.029 09:52:35 -- target/tls.sh@49 -- # local key=/tmp/tmp.jUoN3zF1yl 00:24:12.029 09:52:35 -- target/tls.sh@51 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:24:12.288 [2024-04-18 09:52:35.597328] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:12.288 09:52:35 -- target/tls.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:24:12.547 09:52:35 -- target/tls.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py 
nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:24:12.805 [2024-04-18 09:52:36.137491] tcp.c: 925:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:24:12.805 [2024-04-18 09:52:36.137803] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:12.805 09:52:36 -- target/tls.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:24:13.064 malloc0 00:24:13.064 09:52:36 -- target/tls.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:24:13.631 09:52:36 -- target/tls.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.jUoN3zF1yl 00:24:13.631 [2024-04-18 09:52:36.954659] tcp.c:3562:tcp_load_psk: *ERROR*: Incorrect permissions for PSK file 00:24:13.631 [2024-04-18 09:52:36.954729] tcp.c:3648:nvmf_tcp_subsystem_add_host: *ERROR*: Could not retrieve PSK from file 00:24:13.631 [2024-04-18 09:52:36.954773] subsystem.c: 967:spdk_nvmf_subsystem_add_host: *ERROR*: Unable to add host to TCP transport 00:24:13.631 request: 00:24:13.631 { 00:24:13.631 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:24:13.631 "host": "nqn.2016-06.io.spdk:host1", 00:24:13.631 "psk": "/tmp/tmp.jUoN3zF1yl", 00:24:13.631 "method": "nvmf_subsystem_add_host", 00:24:13.631 "req_id": 1 00:24:13.631 } 00:24:13.631 Got JSON-RPC error response 00:24:13.631 response: 00:24:13.631 { 00:24:13.631 "code": -32603, 00:24:13.631 "message": "Internal error" 00:24:13.631 } 00:24:13.631 09:52:36 -- common/autotest_common.sh@641 -- # es=1 00:24:13.631 09:52:36 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:24:13.631 09:52:36 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:24:13.631 09:52:36 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:24:13.631 09:52:36 -- target/tls.sh@180 -- # killprocess 73475 00:24:13.631 09:52:36 -- common/autotest_common.sh@936 -- # '[' -z 73475 ']' 00:24:13.631 09:52:36 -- common/autotest_common.sh@940 -- # kill -0 73475 00:24:13.631 09:52:36 -- common/autotest_common.sh@941 -- # uname 00:24:13.631 09:52:36 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:24:13.631 09:52:36 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 73475 00:24:13.631 killing process with pid 73475 00:24:13.631 09:52:36 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:24:13.631 09:52:36 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:24:13.631 09:52:36 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 73475' 00:24:13.631 09:52:36 -- common/autotest_common.sh@955 -- # kill 73475 00:24:13.631 09:52:36 -- common/autotest_common.sh@960 -- # wait 73475 00:24:15.078 09:52:38 -- target/tls.sh@181 -- # chmod 0600 /tmp/tmp.jUoN3zF1yl 00:24:15.078 09:52:38 -- target/tls.sh@184 -- # nvmfappstart -m 0x2 00:24:15.078 09:52:38 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:24:15.078 09:52:38 -- common/autotest_common.sh@710 -- # xtrace_disable 00:24:15.078 09:52:38 -- common/autotest_common.sh@10 -- # set +x 00:24:15.078 09:52:38 -- nvmf/common.sh@470 -- # nvmfpid=73550 00:24:15.078 09:52:38 -- nvmf/common.sh@469 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:24:15.078 09:52:38 -- nvmf/common.sh@471 -- # waitforlisten 73550 00:24:15.078 09:52:38 -- common/autotest_common.sh@817 -- # '[' -z 73550 
']' 00:24:15.078 09:52:38 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:15.078 09:52:38 -- common/autotest_common.sh@822 -- # local max_retries=100 00:24:15.078 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:15.078 09:52:38 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:15.078 09:52:38 -- common/autotest_common.sh@826 -- # xtrace_disable 00:24:15.078 09:52:38 -- common/autotest_common.sh@10 -- # set +x 00:24:15.078 [2024-04-18 09:52:38.338258] Starting SPDK v24.05-pre git sha1 65b4e17c6 / DPDK 23.11.0 initialization... 00:24:15.078 [2024-04-18 09:52:38.338425] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:15.356 [2024-04-18 09:52:38.505873] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:15.356 [2024-04-18 09:52:38.743563] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:15.356 [2024-04-18 09:52:38.743620] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:15.356 [2024-04-18 09:52:38.743644] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:15.356 [2024-04-18 09:52:38.743670] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:15.356 [2024-04-18 09:52:38.743685] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:24:15.356 [2024-04-18 09:52:38.743729] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:24:15.924 09:52:39 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:24:15.924 09:52:39 -- common/autotest_common.sh@850 -- # return 0 00:24:15.924 09:52:39 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:24:15.924 09:52:39 -- common/autotest_common.sh@716 -- # xtrace_disable 00:24:15.924 09:52:39 -- common/autotest_common.sh@10 -- # set +x 00:24:15.924 09:52:39 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:15.924 09:52:39 -- target/tls.sh@185 -- # setup_nvmf_tgt /tmp/tmp.jUoN3zF1yl 00:24:15.924 09:52:39 -- target/tls.sh@49 -- # local key=/tmp/tmp.jUoN3zF1yl 00:24:15.924 09:52:39 -- target/tls.sh@51 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:24:16.183 [2024-04-18 09:52:39.572249] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:16.441 09:52:39 -- target/tls.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:24:16.441 09:52:39 -- target/tls.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:24:16.700 [2024-04-18 09:52:40.028374] tcp.c: 925:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:24:16.700 [2024-04-18 09:52:40.028672] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:16.700 09:52:40 -- target/tls.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:24:16.958 malloc0 00:24:17.217 09:52:40 -- target/tls.sh@56 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:24:17.475 09:52:40 -- target/tls.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.jUoN3zF1yl 00:24:17.475 [2024-04-18 09:52:40.829469] tcp.c:3652:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:24:17.475 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:24:17.475 09:52:40 -- target/tls.sh@188 -- # bdevperf_pid=73605 00:24:17.475 09:52:40 -- target/tls.sh@187 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:24:17.475 09:52:40 -- target/tls.sh@190 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:24:17.475 09:52:40 -- target/tls.sh@191 -- # waitforlisten 73605 /var/tmp/bdevperf.sock 00:24:17.475 09:52:40 -- common/autotest_common.sh@817 -- # '[' -z 73605 ']' 00:24:17.475 09:52:40 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:24:17.475 09:52:40 -- common/autotest_common.sh@822 -- # local max_retries=100 00:24:17.475 09:52:40 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:24:17.475 09:52:40 -- common/autotest_common.sh@826 -- # xtrace_disable 00:24:17.475 09:52:40 -- common/autotest_common.sh@10 -- # set +x 00:24:17.733 [2024-04-18 09:52:40.935306] Starting SPDK v24.05-pre git sha1 65b4e17c6 / DPDK 23.11.0 initialization... 00:24:17.733 [2024-04-18 09:52:40.935686] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73605 ] 00:24:17.733 [2024-04-18 09:52:41.101702] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:18.300 [2024-04-18 09:52:41.403299] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:24:18.557 09:52:41 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:24:18.557 09:52:41 -- common/autotest_common.sh@850 -- # return 0 00:24:18.557 09:52:41 -- target/tls.sh@192 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.jUoN3zF1yl 00:24:18.815 [2024-04-18 09:52:42.059285] bdev_nvme_rpc.c: 515:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:24:18.815 [2024-04-18 09:52:42.059474] nvme_tcp.c:2577:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:24:18.815 TLSTESTn1 00:24:18.815 09:52:42 -- target/tls.sh@196 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_config 00:24:19.074 09:52:42 -- target/tls.sh@196 -- # tgtconf='{ 00:24:19.074 "subsystems": [ 00:24:19.074 { 00:24:19.074 "subsystem": "keyring", 00:24:19.074 "config": [] 00:24:19.074 }, 00:24:19.074 { 00:24:19.074 "subsystem": "iobuf", 00:24:19.074 "config": [ 00:24:19.074 { 00:24:19.074 "method": "iobuf_set_options", 00:24:19.074 "params": { 00:24:19.074 "small_pool_count": 8192, 00:24:19.074 "large_pool_count": 1024, 00:24:19.074 "small_bufsize": 8192, 00:24:19.074 "large_bufsize": 135168 00:24:19.074 } 
00:24:19.074 } 00:24:19.074 ] 00:24:19.074 }, 00:24:19.074 { 00:24:19.074 "subsystem": "sock", 00:24:19.074 "config": [ 00:24:19.074 { 00:24:19.074 "method": "sock_impl_set_options", 00:24:19.074 "params": { 00:24:19.074 "impl_name": "uring", 00:24:19.074 "recv_buf_size": 2097152, 00:24:19.074 "send_buf_size": 2097152, 00:24:19.074 "enable_recv_pipe": true, 00:24:19.074 "enable_quickack": false, 00:24:19.074 "enable_placement_id": 0, 00:24:19.074 "enable_zerocopy_send_server": false, 00:24:19.074 "enable_zerocopy_send_client": false, 00:24:19.074 "zerocopy_threshold": 0, 00:24:19.074 "tls_version": 0, 00:24:19.074 "enable_ktls": false 00:24:19.074 } 00:24:19.074 }, 00:24:19.074 { 00:24:19.074 "method": "sock_impl_set_options", 00:24:19.074 "params": { 00:24:19.074 "impl_name": "posix", 00:24:19.074 "recv_buf_size": 2097152, 00:24:19.074 "send_buf_size": 2097152, 00:24:19.074 "enable_recv_pipe": true, 00:24:19.074 "enable_quickack": false, 00:24:19.074 "enable_placement_id": 0, 00:24:19.074 "enable_zerocopy_send_server": true, 00:24:19.074 "enable_zerocopy_send_client": false, 00:24:19.074 "zerocopy_threshold": 0, 00:24:19.074 "tls_version": 0, 00:24:19.074 "enable_ktls": false 00:24:19.074 } 00:24:19.074 }, 00:24:19.074 { 00:24:19.074 "method": "sock_impl_set_options", 00:24:19.074 "params": { 00:24:19.074 "impl_name": "ssl", 00:24:19.074 "recv_buf_size": 4096, 00:24:19.074 "send_buf_size": 4096, 00:24:19.074 "enable_recv_pipe": true, 00:24:19.074 "enable_quickack": false, 00:24:19.074 "enable_placement_id": 0, 00:24:19.074 "enable_zerocopy_send_server": true, 00:24:19.074 "enable_zerocopy_send_client": false, 00:24:19.074 "zerocopy_threshold": 0, 00:24:19.074 "tls_version": 0, 00:24:19.074 "enable_ktls": false 00:24:19.074 } 00:24:19.074 } 00:24:19.074 ] 00:24:19.074 }, 00:24:19.074 { 00:24:19.074 "subsystem": "vmd", 00:24:19.074 "config": [] 00:24:19.074 }, 00:24:19.074 { 00:24:19.074 "subsystem": "accel", 00:24:19.074 "config": [ 00:24:19.074 { 00:24:19.074 "method": "accel_set_options", 00:24:19.074 "params": { 00:24:19.074 "small_cache_size": 128, 00:24:19.074 "large_cache_size": 16, 00:24:19.074 "task_count": 2048, 00:24:19.074 "sequence_count": 2048, 00:24:19.074 "buf_count": 2048 00:24:19.074 } 00:24:19.074 } 00:24:19.074 ] 00:24:19.074 }, 00:24:19.074 { 00:24:19.074 "subsystem": "bdev", 00:24:19.074 "config": [ 00:24:19.074 { 00:24:19.074 "method": "bdev_set_options", 00:24:19.074 "params": { 00:24:19.074 "bdev_io_pool_size": 65535, 00:24:19.074 "bdev_io_cache_size": 256, 00:24:19.074 "bdev_auto_examine": true, 00:24:19.074 "iobuf_small_cache_size": 128, 00:24:19.074 "iobuf_large_cache_size": 16 00:24:19.074 } 00:24:19.074 }, 00:24:19.074 { 00:24:19.074 "method": "bdev_raid_set_options", 00:24:19.074 "params": { 00:24:19.074 "process_window_size_kb": 1024 00:24:19.074 } 00:24:19.074 }, 00:24:19.074 { 00:24:19.074 "method": "bdev_iscsi_set_options", 00:24:19.074 "params": { 00:24:19.074 "timeout_sec": 30 00:24:19.074 } 00:24:19.074 }, 00:24:19.074 { 00:24:19.074 "method": "bdev_nvme_set_options", 00:24:19.074 "params": { 00:24:19.074 "action_on_timeout": "none", 00:24:19.074 "timeout_us": 0, 00:24:19.074 "timeout_admin_us": 0, 00:24:19.074 "keep_alive_timeout_ms": 10000, 00:24:19.074 "arbitration_burst": 0, 00:24:19.074 "low_priority_weight": 0, 00:24:19.074 "medium_priority_weight": 0, 00:24:19.074 "high_priority_weight": 0, 00:24:19.074 "nvme_adminq_poll_period_us": 10000, 00:24:19.074 "nvme_ioq_poll_period_us": 0, 00:24:19.074 "io_queue_requests": 0, 00:24:19.074 
"delay_cmd_submit": true, 00:24:19.074 "transport_retry_count": 4, 00:24:19.074 "bdev_retry_count": 3, 00:24:19.074 "transport_ack_timeout": 0, 00:24:19.074 "ctrlr_loss_timeout_sec": 0, 00:24:19.074 "reconnect_delay_sec": 0, 00:24:19.074 "fast_io_fail_timeout_sec": 0, 00:24:19.074 "disable_auto_failback": false, 00:24:19.074 "generate_uuids": false, 00:24:19.074 "transport_tos": 0, 00:24:19.074 "nvme_error_stat": false, 00:24:19.074 "rdma_srq_size": 0, 00:24:19.074 "io_path_stat": false, 00:24:19.074 "allow_accel_sequence": false, 00:24:19.074 "rdma_max_cq_size": 0, 00:24:19.074 "rdma_cm_event_timeout_ms": 0, 00:24:19.074 "dhchap_digests": [ 00:24:19.074 "sha256", 00:24:19.074 "sha384", 00:24:19.074 "sha512" 00:24:19.074 ], 00:24:19.074 "dhchap_dhgroups": [ 00:24:19.074 "null", 00:24:19.074 "ffdhe2048", 00:24:19.074 "ffdhe3072", 00:24:19.074 "ffdhe4096", 00:24:19.074 "ffdhe6144", 00:24:19.074 "ffdhe8192" 00:24:19.074 ] 00:24:19.074 } 00:24:19.074 }, 00:24:19.074 { 00:24:19.074 "method": "bdev_nvme_set_hotplug", 00:24:19.074 "params": { 00:24:19.074 "period_us": 100000, 00:24:19.074 "enable": false 00:24:19.074 } 00:24:19.074 }, 00:24:19.074 { 00:24:19.074 "method": "bdev_malloc_create", 00:24:19.074 "params": { 00:24:19.074 "name": "malloc0", 00:24:19.074 "num_blocks": 8192, 00:24:19.074 "block_size": 4096, 00:24:19.074 "physical_block_size": 4096, 00:24:19.074 "uuid": "45e748b0-b3e7-4325-9360-afea5298204d", 00:24:19.074 "optimal_io_boundary": 0 00:24:19.074 } 00:24:19.074 }, 00:24:19.074 { 00:24:19.074 "method": "bdev_wait_for_examine" 00:24:19.074 } 00:24:19.074 ] 00:24:19.074 }, 00:24:19.074 { 00:24:19.074 "subsystem": "nbd", 00:24:19.074 "config": [] 00:24:19.074 }, 00:24:19.074 { 00:24:19.074 "subsystem": "scheduler", 00:24:19.074 "config": [ 00:24:19.074 { 00:24:19.074 "method": "framework_set_scheduler", 00:24:19.074 "params": { 00:24:19.074 "name": "static" 00:24:19.074 } 00:24:19.074 } 00:24:19.074 ] 00:24:19.074 }, 00:24:19.074 { 00:24:19.074 "subsystem": "nvmf", 00:24:19.074 "config": [ 00:24:19.074 { 00:24:19.074 "method": "nvmf_set_config", 00:24:19.074 "params": { 00:24:19.074 "discovery_filter": "match_any", 00:24:19.074 "admin_cmd_passthru": { 00:24:19.074 "identify_ctrlr": false 00:24:19.074 } 00:24:19.074 } 00:24:19.074 }, 00:24:19.074 { 00:24:19.074 "method": "nvmf_set_max_subsystems", 00:24:19.074 "params": { 00:24:19.074 "max_subsystems": 1024 00:24:19.074 } 00:24:19.074 }, 00:24:19.074 { 00:24:19.074 "method": "nvmf_set_crdt", 00:24:19.074 "params": { 00:24:19.074 "crdt1": 0, 00:24:19.074 "crdt2": 0, 00:24:19.074 "crdt3": 0 00:24:19.075 } 00:24:19.075 }, 00:24:19.075 { 00:24:19.075 "method": "nvmf_create_transport", 00:24:19.075 "params": { 00:24:19.075 "trtype": "TCP", 00:24:19.075 "max_queue_depth": 128, 00:24:19.075 "max_io_qpairs_per_ctrlr": 127, 00:24:19.075 "in_capsule_data_size": 4096, 00:24:19.075 "max_io_size": 131072, 00:24:19.075 "io_unit_size": 131072, 00:24:19.075 "max_aq_depth": 128, 00:24:19.075 "num_shared_buffers": 511, 00:24:19.075 "buf_cache_size": 4294967295, 00:24:19.075 "dif_insert_or_strip": false, 00:24:19.075 "zcopy": false, 00:24:19.075 "c2h_success": false, 00:24:19.075 "sock_priority": 0, 00:24:19.075 "abort_timeout_sec": 1, 00:24:19.075 "ack_timeout": 0 00:24:19.075 } 00:24:19.075 }, 00:24:19.075 { 00:24:19.075 "method": "nvmf_create_subsystem", 00:24:19.075 "params": { 00:24:19.075 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:24:19.075 "allow_any_host": false, 00:24:19.075 "serial_number": "SPDK00000000000001", 00:24:19.075 "model_number": 
"SPDK bdev Controller", 00:24:19.075 "max_namespaces": 10, 00:24:19.075 "min_cntlid": 1, 00:24:19.075 "max_cntlid": 65519, 00:24:19.075 "ana_reporting": false 00:24:19.075 } 00:24:19.075 }, 00:24:19.075 { 00:24:19.075 "method": "nvmf_subsystem_add_host", 00:24:19.075 "params": { 00:24:19.075 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:24:19.075 "host": "nqn.2016-06.io.spdk:host1", 00:24:19.075 "psk": "/tmp/tmp.jUoN3zF1yl" 00:24:19.075 } 00:24:19.075 }, 00:24:19.075 { 00:24:19.075 "method": "nvmf_subsystem_add_ns", 00:24:19.075 "params": { 00:24:19.075 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:24:19.075 "namespace": { 00:24:19.075 "nsid": 1, 00:24:19.075 "bdev_name": "malloc0", 00:24:19.075 "nguid": "45E748B0B3E743259360AFEA5298204D", 00:24:19.075 "uuid": "45e748b0-b3e7-4325-9360-afea5298204d", 00:24:19.075 "no_auto_visible": false 00:24:19.075 } 00:24:19.075 } 00:24:19.075 }, 00:24:19.075 { 00:24:19.075 "method": "nvmf_subsystem_add_listener", 00:24:19.075 "params": { 00:24:19.075 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:24:19.075 "listen_address": { 00:24:19.075 "trtype": "TCP", 00:24:19.075 "adrfam": "IPv4", 00:24:19.075 "traddr": "10.0.0.2", 00:24:19.075 "trsvcid": "4420" 00:24:19.075 }, 00:24:19.075 "secure_channel": true 00:24:19.075 } 00:24:19.075 } 00:24:19.075 ] 00:24:19.075 } 00:24:19.075 ] 00:24:19.075 }' 00:24:19.075 09:52:42 -- target/tls.sh@197 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:24:19.641 09:52:42 -- target/tls.sh@197 -- # bdevperfconf='{ 00:24:19.641 "subsystems": [ 00:24:19.641 { 00:24:19.641 "subsystem": "keyring", 00:24:19.641 "config": [] 00:24:19.641 }, 00:24:19.641 { 00:24:19.641 "subsystem": "iobuf", 00:24:19.641 "config": [ 00:24:19.641 { 00:24:19.641 "method": "iobuf_set_options", 00:24:19.641 "params": { 00:24:19.641 "small_pool_count": 8192, 00:24:19.641 "large_pool_count": 1024, 00:24:19.641 "small_bufsize": 8192, 00:24:19.641 "large_bufsize": 135168 00:24:19.641 } 00:24:19.641 } 00:24:19.641 ] 00:24:19.641 }, 00:24:19.641 { 00:24:19.641 "subsystem": "sock", 00:24:19.641 "config": [ 00:24:19.641 { 00:24:19.641 "method": "sock_impl_set_options", 00:24:19.641 "params": { 00:24:19.641 "impl_name": "uring", 00:24:19.641 "recv_buf_size": 2097152, 00:24:19.641 "send_buf_size": 2097152, 00:24:19.641 "enable_recv_pipe": true, 00:24:19.641 "enable_quickack": false, 00:24:19.641 "enable_placement_id": 0, 00:24:19.641 "enable_zerocopy_send_server": false, 00:24:19.641 "enable_zerocopy_send_client": false, 00:24:19.641 "zerocopy_threshold": 0, 00:24:19.641 "tls_version": 0, 00:24:19.641 "enable_ktls": false 00:24:19.641 } 00:24:19.641 }, 00:24:19.641 { 00:24:19.641 "method": "sock_impl_set_options", 00:24:19.641 "params": { 00:24:19.641 "impl_name": "posix", 00:24:19.641 "recv_buf_size": 2097152, 00:24:19.641 "send_buf_size": 2097152, 00:24:19.641 "enable_recv_pipe": true, 00:24:19.641 "enable_quickack": false, 00:24:19.641 "enable_placement_id": 0, 00:24:19.641 "enable_zerocopy_send_server": true, 00:24:19.641 "enable_zerocopy_send_client": false, 00:24:19.641 "zerocopy_threshold": 0, 00:24:19.641 "tls_version": 0, 00:24:19.641 "enable_ktls": false 00:24:19.641 } 00:24:19.641 }, 00:24:19.641 { 00:24:19.641 "method": "sock_impl_set_options", 00:24:19.641 "params": { 00:24:19.641 "impl_name": "ssl", 00:24:19.641 "recv_buf_size": 4096, 00:24:19.641 "send_buf_size": 4096, 00:24:19.641 "enable_recv_pipe": true, 00:24:19.641 "enable_quickack": false, 00:24:19.641 "enable_placement_id": 0, 00:24:19.641 "enable_zerocopy_send_server": 
true, 00:24:19.641 "enable_zerocopy_send_client": false, 00:24:19.641 "zerocopy_threshold": 0, 00:24:19.641 "tls_version": 0, 00:24:19.641 "enable_ktls": false 00:24:19.641 } 00:24:19.641 } 00:24:19.641 ] 00:24:19.641 }, 00:24:19.641 { 00:24:19.641 "subsystem": "vmd", 00:24:19.641 "config": [] 00:24:19.641 }, 00:24:19.641 { 00:24:19.641 "subsystem": "accel", 00:24:19.641 "config": [ 00:24:19.641 { 00:24:19.641 "method": "accel_set_options", 00:24:19.641 "params": { 00:24:19.641 "small_cache_size": 128, 00:24:19.642 "large_cache_size": 16, 00:24:19.642 "task_count": 2048, 00:24:19.642 "sequence_count": 2048, 00:24:19.642 "buf_count": 2048 00:24:19.642 } 00:24:19.642 } 00:24:19.642 ] 00:24:19.642 }, 00:24:19.642 { 00:24:19.642 "subsystem": "bdev", 00:24:19.642 "config": [ 00:24:19.642 { 00:24:19.642 "method": "bdev_set_options", 00:24:19.642 "params": { 00:24:19.642 "bdev_io_pool_size": 65535, 00:24:19.642 "bdev_io_cache_size": 256, 00:24:19.642 "bdev_auto_examine": true, 00:24:19.642 "iobuf_small_cache_size": 128, 00:24:19.642 "iobuf_large_cache_size": 16 00:24:19.642 } 00:24:19.642 }, 00:24:19.642 { 00:24:19.642 "method": "bdev_raid_set_options", 00:24:19.642 "params": { 00:24:19.642 "process_window_size_kb": 1024 00:24:19.642 } 00:24:19.642 }, 00:24:19.642 { 00:24:19.642 "method": "bdev_iscsi_set_options", 00:24:19.642 "params": { 00:24:19.642 "timeout_sec": 30 00:24:19.642 } 00:24:19.642 }, 00:24:19.642 { 00:24:19.642 "method": "bdev_nvme_set_options", 00:24:19.642 "params": { 00:24:19.642 "action_on_timeout": "none", 00:24:19.642 "timeout_us": 0, 00:24:19.642 "timeout_admin_us": 0, 00:24:19.642 "keep_alive_timeout_ms": 10000, 00:24:19.642 "arbitration_burst": 0, 00:24:19.642 "low_priority_weight": 0, 00:24:19.642 "medium_priority_weight": 0, 00:24:19.642 "high_priority_weight": 0, 00:24:19.642 "nvme_adminq_poll_period_us": 10000, 00:24:19.642 "nvme_ioq_poll_period_us": 0, 00:24:19.642 "io_queue_requests": 512, 00:24:19.642 "delay_cmd_submit": true, 00:24:19.642 "transport_retry_count": 4, 00:24:19.642 "bdev_retry_count": 3, 00:24:19.642 "transport_ack_timeout": 0, 00:24:19.642 "ctrlr_loss_timeout_sec": 0, 00:24:19.642 "reconnect_delay_sec": 0, 00:24:19.642 "fast_io_fail_timeout_sec": 0, 00:24:19.642 "disable_auto_failback": false, 00:24:19.642 "generate_uuids": false, 00:24:19.642 "transport_tos": 0, 00:24:19.642 "nvme_error_stat": false, 00:24:19.642 "rdma_srq_size": 0, 00:24:19.642 "io_path_stat": false, 00:24:19.642 "allow_accel_sequence": false, 00:24:19.642 "rdma_max_cq_size": 0, 00:24:19.642 "rdma_cm_event_timeout_ms": 0, 00:24:19.642 "dhchap_digests": [ 00:24:19.642 "sha256", 00:24:19.642 "sha384", 00:24:19.642 "sha512" 00:24:19.642 ], 00:24:19.642 "dhchap_dhgroups": [ 00:24:19.642 "null", 00:24:19.642 "ffdhe2048", 00:24:19.642 "ffdhe3072", 00:24:19.642 "ffdhe4096", 00:24:19.642 "ffdhe6144", 00:24:19.642 "ffdhe8192" 00:24:19.642 ] 00:24:19.642 } 00:24:19.642 }, 00:24:19.642 { 00:24:19.642 "method": "bdev_nvme_attach_controller", 00:24:19.642 "params": { 00:24:19.642 "name": "TLSTEST", 00:24:19.642 "trtype": "TCP", 00:24:19.642 "adrfam": "IPv4", 00:24:19.642 "traddr": "10.0.0.2", 00:24:19.642 "trsvcid": "4420", 00:24:19.642 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:24:19.642 "prchk_reftag": false, 00:24:19.642 "prchk_guard": false, 00:24:19.642 "ctrlr_loss_timeout_sec": 0, 00:24:19.642 "reconnect_delay_sec": 0, 00:24:19.642 "fast_io_fail_timeout_sec": 0, 00:24:19.642 "psk": "/tmp/tmp.jUoN3zF1yl", 00:24:19.642 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:24:19.642 "hdgst": false, 
00:24:19.642 "ddgst": false 00:24:19.642 } 00:24:19.642 }, 00:24:19.642 { 00:24:19.642 "method": "bdev_nvme_set_hotplug", 00:24:19.642 "params": { 00:24:19.642 "period_us": 100000, 00:24:19.642 "enable": false 00:24:19.642 } 00:24:19.642 }, 00:24:19.642 { 00:24:19.642 "method": "bdev_wait_for_examine" 00:24:19.642 } 00:24:19.642 ] 00:24:19.642 }, 00:24:19.642 { 00:24:19.642 "subsystem": "nbd", 00:24:19.642 "config": [] 00:24:19.642 } 00:24:19.642 ] 00:24:19.642 }' 00:24:19.642 09:52:42 -- target/tls.sh@199 -- # killprocess 73605 00:24:19.642 09:52:42 -- common/autotest_common.sh@936 -- # '[' -z 73605 ']' 00:24:19.642 09:52:42 -- common/autotest_common.sh@940 -- # kill -0 73605 00:24:19.642 09:52:42 -- common/autotest_common.sh@941 -- # uname 00:24:19.642 09:52:42 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:24:19.642 09:52:42 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 73605 00:24:19.642 killing process with pid 73605 00:24:19.642 Received shutdown signal, test time was about 10.000000 seconds 00:24:19.642 00:24:19.642 Latency(us) 00:24:19.642 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:19.642 =================================================================================================================== 00:24:19.642 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:24:19.642 09:52:42 -- common/autotest_common.sh@942 -- # process_name=reactor_2 00:24:19.642 09:52:42 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']' 00:24:19.642 09:52:42 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 73605' 00:24:19.642 09:52:42 -- common/autotest_common.sh@955 -- # kill 73605 00:24:19.642 09:52:42 -- common/autotest_common.sh@960 -- # wait 73605 00:24:19.642 [2024-04-18 09:52:42.814395] app.c: 937:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:24:21.014 09:52:43 -- target/tls.sh@200 -- # killprocess 73550 00:24:21.014 09:52:43 -- common/autotest_common.sh@936 -- # '[' -z 73550 ']' 00:24:21.014 09:52:43 -- common/autotest_common.sh@940 -- # kill -0 73550 00:24:21.014 09:52:43 -- common/autotest_common.sh@941 -- # uname 00:24:21.015 09:52:43 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:24:21.015 09:52:43 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 73550 00:24:21.015 killing process with pid 73550 00:24:21.015 09:52:43 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:24:21.015 09:52:43 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:24:21.015 09:52:43 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 73550' 00:24:21.015 09:52:43 -- common/autotest_common.sh@955 -- # kill 73550 00:24:21.015 09:52:43 -- common/autotest_common.sh@960 -- # wait 73550 00:24:21.015 [2024-04-18 09:52:43.996706] app.c: 937:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:24:21.951 09:52:45 -- target/tls.sh@203 -- # nvmfappstart -m 0x2 -c /dev/fd/62 00:24:21.951 09:52:45 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:24:21.951 09:52:45 -- common/autotest_common.sh@710 -- # xtrace_disable 00:24:21.951 09:52:45 -- target/tls.sh@203 -- # echo '{ 00:24:21.951 "subsystems": [ 00:24:21.951 { 00:24:21.951 "subsystem": "keyring", 00:24:21.951 "config": [] 00:24:21.951 }, 00:24:21.952 { 00:24:21.952 "subsystem": "iobuf", 00:24:21.952 "config": [ 00:24:21.952 { 
00:24:21.952 "method": "iobuf_set_options", 00:24:21.952 "params": { 00:24:21.952 "small_pool_count": 8192, 00:24:21.952 "large_pool_count": 1024, 00:24:21.952 "small_bufsize": 8192, 00:24:21.952 "large_bufsize": 135168 00:24:21.952 } 00:24:21.952 } 00:24:21.952 ] 00:24:21.952 }, 00:24:21.952 { 00:24:21.952 "subsystem": "sock", 00:24:21.952 "config": [ 00:24:21.952 { 00:24:21.952 "method": "sock_impl_set_options", 00:24:21.952 "params": { 00:24:21.952 "impl_name": "uring", 00:24:21.952 "recv_buf_size": 2097152, 00:24:21.952 "send_buf_size": 2097152, 00:24:21.952 "enable_recv_pipe": true, 00:24:21.952 "enable_quickack": false, 00:24:21.952 "enable_placement_id": 0, 00:24:21.952 "enable_zerocopy_send_server": false, 00:24:21.952 "enable_zerocopy_send_client": false, 00:24:21.952 "zerocopy_threshold": 0, 00:24:21.952 "tls_version": 0, 00:24:21.952 "enable_ktls": false 00:24:21.952 } 00:24:21.952 }, 00:24:21.952 { 00:24:21.952 "method": "sock_impl_set_options", 00:24:21.952 "params": { 00:24:21.952 "impl_name": "posix", 00:24:21.952 "recv_buf_size": 2097152, 00:24:21.952 "send_buf_size": 2097152, 00:24:21.952 "enable_recv_pipe": true, 00:24:21.952 "enable_quickack": false, 00:24:21.952 "enable_placement_id": 0, 00:24:21.952 "enable_zerocopy_send_server": true, 00:24:21.952 "enable_zerocopy_send_client": false, 00:24:21.952 "zerocopy_threshold": 0, 00:24:21.952 "tls_version": 0, 00:24:21.952 "enable_ktls": false 00:24:21.952 } 00:24:21.952 }, 00:24:21.952 { 00:24:21.952 "method": "sock_impl_set_options", 00:24:21.952 "params": { 00:24:21.952 "impl_name": "ssl", 00:24:21.952 "recv_buf_size": 4096, 00:24:21.952 "send_buf_size": 4096, 00:24:21.952 "enable_recv_pipe": true, 00:24:21.952 "enable_quickack": false, 00:24:21.952 "enable_placement_id": 0, 00:24:21.952 "enable_zerocopy_send_server": true, 00:24:21.952 "enable_zerocopy_send_client": false, 00:24:21.952 "zerocopy_threshold": 0, 00:24:21.952 "tls_version": 0, 00:24:21.952 "enable_ktls": false 00:24:21.952 } 00:24:21.952 } 00:24:21.952 ] 00:24:21.952 }, 00:24:21.952 { 00:24:21.952 "subsystem": "vmd", 00:24:21.952 "config": [] 00:24:21.952 }, 00:24:21.952 { 00:24:21.952 "subsystem": "accel", 00:24:21.952 "config": [ 00:24:21.952 { 00:24:21.952 "method": "accel_set_options", 00:24:21.952 "params": { 00:24:21.952 "small_cache_size": 128, 00:24:21.952 "large_cache_size": 16, 00:24:21.952 "task_count": 2048, 00:24:21.952 "sequence_count": 2048, 00:24:21.952 "buf_count": 2048 00:24:21.952 } 00:24:21.952 } 00:24:21.952 ] 00:24:21.952 }, 00:24:21.952 { 00:24:21.952 "subsystem": "bdev", 00:24:21.952 "config": [ 00:24:21.952 { 00:24:21.952 "method": "bdev_set_options", 00:24:21.952 "params": { 00:24:21.952 "bdev_io_pool_size": 65535, 00:24:21.952 "bdev_io_cache_size": 256, 00:24:21.952 "bdev_auto_examine": true, 00:24:21.952 "iobuf_small_cache_size": 128, 00:24:21.952 "iobuf_large_cache_size": 16 00:24:21.952 } 00:24:21.952 }, 00:24:21.952 { 00:24:21.952 "method": "bdev_raid_set_options", 00:24:21.952 "params": { 00:24:21.952 "process_window_size_kb": 1024 00:24:21.952 } 00:24:21.952 }, 00:24:21.952 { 00:24:21.952 "method": "bdev_iscsi_set_options", 00:24:21.952 "params": { 00:24:21.952 "timeout_sec": 30 00:24:21.952 } 00:24:21.952 }, 00:24:21.952 { 00:24:21.952 "method": "bdev_nvme_set_options", 00:24:21.952 "params": { 00:24:21.952 "action_on_timeout": "none", 00:24:21.952 "timeout_us": 0, 00:24:21.952 "timeout_admin_us": 0, 00:24:21.952 "keep_alive_timeout_ms": 10000, 00:24:21.952 "arbitration_burst": 0, 00:24:21.952 "low_priority_weight": 0, 
00:24:21.952 "medium_priority_weight": 0, 00:24:21.952 "high_priority_weight": 0, 00:24:21.952 "nvme_adminq_poll_period_us": 10000, 00:24:21.952 "nvme_ioq_poll_period_us": 0, 00:24:21.952 "io_queue_requests": 0, 00:24:21.952 "delay_cmd_submit": true, 00:24:21.952 "transport_retry_count": 4, 00:24:21.952 "bdev_retry_count": 3, 00:24:21.952 "transport_ack_timeout": 0, 00:24:21.952 "ctrlr_loss_timeout_sec": 0, 00:24:21.952 "reconnect_delay_sec": 0, 00:24:21.952 "fast_io_fail_timeout_sec": 0, 00:24:21.952 "disable_auto_failback": false, 00:24:21.952 "generate_uuids": false, 00:24:21.952 "transport_tos": 0, 00:24:21.952 "nvme_error_stat": false, 00:24:21.952 "rdma_srq_size": 0, 00:24:21.952 "io_path_stat": false, 00:24:21.952 "allow_accel_sequence": false, 00:24:21.952 "rdma_max_cq_size": 0, 00:24:21.952 "rdma_cm_event_timeout_ms": 0, 00:24:21.952 "dhchap_digests": [ 00:24:21.952 "sha256", 00:24:21.952 "sha384", 00:24:21.952 "sha512" 00:24:21.952 ], 00:24:21.952 "dhchap_dhgroups": [ 00:24:21.952 "null", 00:24:21.952 "ffdhe2048", 00:24:21.952 "ffdhe3072", 00:24:21.952 "ffdhe4096", 00:24:21.952 "ffdhe6144", 00:24:21.952 "ffdhe8192" 00:24:21.952 ] 00:24:21.952 } 00:24:21.952 }, 00:24:21.952 { 00:24:21.952 "method": "bdev_nvme_set_hotplug", 00:24:21.952 "params": { 00:24:21.952 "period_us": 100000, 00:24:21.952 "enable": false 00:24:21.952 } 00:24:21.952 }, 00:24:21.952 { 00:24:21.952 "method": "bdev_malloc_create", 00:24:21.952 "params": { 00:24:21.952 "name": "malloc0", 00:24:21.952 "num_blocks": 8192, 00:24:21.952 "block_size": 4096, 00:24:21.952 "physical_block_size": 4096, 00:24:21.952 "uuid": "45e748b0-b3e7-4325-9360-afea5298204d", 00:24:21.952 "optimal_io_boundary": 0 00:24:21.952 } 00:24:21.952 }, 00:24:21.952 { 00:24:21.952 "method": "bdev_wait_for_examine" 00:24:21.952 } 00:24:21.952 ] 00:24:21.952 }, 00:24:21.952 { 00:24:21.952 "subsystem": "nbd", 00:24:21.952 "config": [] 00:24:21.952 }, 00:24:21.952 { 00:24:21.952 "subsystem": "scheduler", 00:24:21.952 "config": [ 00:24:21.952 { 00:24:21.952 "method": "framework_set_scheduler", 00:24:21.952 "params": { 00:24:21.952 "name": "static" 00:24:21.952 } 00:24:21.952 } 00:24:21.952 ] 00:24:21.952 }, 00:24:21.952 { 00:24:21.952 "subsystem": "nvmf", 00:24:21.952 "config": [ 00:24:21.952 { 00:24:21.952 "method": "nvmf_set_config", 00:24:21.952 "params": { 00:24:21.952 "discovery_filter": "match_any", 00:24:21.952 "admin_cmd_passthru": { 00:24:21.952 "identify_ctrlr": false 00:24:21.952 } 00:24:21.952 } 00:24:21.952 }, 00:24:21.952 { 00:24:21.952 "method": "nvmf_set_max_subsystems", 00:24:21.952 "params": { 00:24:21.952 "max_subsystems": 1024 00:24:21.952 } 00:24:21.952 }, 00:24:21.952 { 00:24:21.952 "method": "nvmf_set_crdt", 00:24:21.952 "params": { 00:24:21.952 "crdt1": 0, 00:24:21.952 "crdt2": 0, 00:24:21.952 "crdt3": 0 00:24:21.952 } 00:24:21.952 }, 00:24:21.952 { 00:24:21.952 "method": "nvmf_create_transport", 00:24:21.952 "params": { 00:24:21.952 "trtype": "TCP", 00:24:21.952 "max_queue_depth": 128, 00:24:21.952 "max_io_qpairs_per_ctrlr": 127, 00:24:21.952 "in_capsule_data_size": 4096, 00:24:21.952 "max_io_size": 131072, 00:24:21.952 "io_unit_size": 131072, 00:24:21.952 "max_aq_depth": 128, 00:24:21.952 "num_shared_buffers": 511, 00:24:21.952 "buf_cache_size": 4294967295, 00:24:21.952 "dif_insert_or_strip": false, 00:24:21.952 "zcopy": false, 00:24:21.952 "c2h_success": false, 00:24:21.952 "sock_priority": 0, 00:24:21.952 "abort_timeout_sec": 1, 00:24:21.952 "ack_timeout": 0 00:24:21.952 } 00:24:21.952 }, 00:24:21.952 { 00:24:21.952 
"method": "nvmf_create_subsystem", 00:24:21.952 "params": { 00:24:21.952 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:24:21.952 "allow_any_host": false, 00:24:21.953 "serial_number": "SPDK00000000000001", 00:24:21.953 "model_number": "SPDK bdev Controller", 00:24:21.953 "max_namespaces": 10, 00:24:21.953 "min_cntlid": 1, 00:24:21.953 "max_cntlid": 65519, 00:24:21.953 "ana_reporting": false 00:24:21.953 } 00:24:21.953 }, 00:24:21.953 { 00:24:21.953 "method": "nvmf_subsystem_add_host", 00:24:21.953 "params": { 00:24:21.953 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:24:21.953 "host": "nqn.2016-06.io.spdk:host1", 00:24:21.953 "psk": "/tmp/tmp.jUoN3zF1yl" 00:24:21.953 } 00:24:21.953 }, 00:24:21.953 { 00:24:21.953 "method": "nvmf_subsystem_add_ns", 00:24:21.953 "params": { 00:24:21.953 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:24:21.953 "namespace": { 00:24:21.953 "nsid": 1, 00:24:21.953 "bdev_name": "malloc0", 00:24:21.953 "nguid": "45E748B0B3E743259360AFEA5298204D", 00:24:21.953 "uuid": "45e748b0-b3e7-4325-9360-afea5298204d", 00:24:21.953 "no_auto_visible": false 00:24:21.953 } 00:24:21.953 } 00:24:21.953 }, 00:24:21.953 { 00:24:21.953 "method": "nvmf_subsystem_add_listener", 00:24:21.953 "params": { 00:24:21.953 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:24:21.953 "listen_address": { 00:24:21.953 "trtype": "TCP", 00:24:21.953 "adrfam": "IPv4", 00:24:21.953 "traddr": "10.0.0.2", 00:24:21.953 "trsvcid": "4420" 00:24:21.953 }, 00:24:21.953 "secure_channel": true 00:24:21.953 } 00:24:21.953 } 00:24:21.953 ] 00:24:21.953 } 00:24:21.953 ] 00:24:21.953 }' 00:24:21.953 09:52:45 -- common/autotest_common.sh@10 -- # set +x 00:24:21.953 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:21.953 09:52:45 -- nvmf/common.sh@470 -- # nvmfpid=73672 00:24:21.953 09:52:45 -- nvmf/common.sh@471 -- # waitforlisten 73672 00:24:21.953 09:52:45 -- nvmf/common.sh@469 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 -c /dev/fd/62 00:24:21.953 09:52:45 -- common/autotest_common.sh@817 -- # '[' -z 73672 ']' 00:24:21.953 09:52:45 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:21.953 09:52:45 -- common/autotest_common.sh@822 -- # local max_retries=100 00:24:21.953 09:52:45 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:21.953 09:52:45 -- common/autotest_common.sh@826 -- # xtrace_disable 00:24:21.953 09:52:45 -- common/autotest_common.sh@10 -- # set +x 00:24:22.212 [2024-04-18 09:52:45.412699] Starting SPDK v24.05-pre git sha1 65b4e17c6 / DPDK 23.11.0 initialization... 00:24:22.212 [2024-04-18 09:52:45.413180] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:22.212 [2024-04-18 09:52:45.594605] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:22.472 [2024-04-18 09:52:45.836855] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:22.472 [2024-04-18 09:52:45.837067] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:24:22.472 [2024-04-18 09:52:45.837263] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:22.472 [2024-04-18 09:52:45.837424] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:22.472 [2024-04-18 09:52:45.837473] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:24:22.472 [2024-04-18 09:52:45.837732] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:24:23.040 [2024-04-18 09:52:46.366263] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:23.040 [2024-04-18 09:52:46.382208] tcp.c:3652:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:24:23.040 [2024-04-18 09:52:46.398211] tcp.c: 925:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:24:23.040 [2024-04-18 09:52:46.406325] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:23.299 09:52:46 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:24:23.299 09:52:46 -- common/autotest_common.sh@850 -- # return 0 00:24:23.299 09:52:46 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:24:23.299 09:52:46 -- common/autotest_common.sh@716 -- # xtrace_disable 00:24:23.299 09:52:46 -- common/autotest_common.sh@10 -- # set +x 00:24:23.299 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:24:23.299 09:52:46 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:23.299 09:52:46 -- target/tls.sh@207 -- # bdevperf_pid=73704 00:24:23.299 09:52:46 -- target/tls.sh@208 -- # waitforlisten 73704 /var/tmp/bdevperf.sock 00:24:23.299 09:52:46 -- common/autotest_common.sh@817 -- # '[' -z 73704 ']' 00:24:23.299 09:52:46 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:24:23.299 09:52:46 -- common/autotest_common.sh@822 -- # local max_retries=100 00:24:23.299 09:52:46 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
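waitforlisten then blocks until the freshly started app answers on its UNIX-domain RPC socket, which is what the 'Waiting for process to start up and listen...' messages correspond to. A simplified stand-in for that helper; the real implementation lives in autotest_common.sh and differs in detail, and rpc_get_methods is used here only as a cheap liveness probe:

waitforlisten_sketch() {
    local pid=$1 rpc_sock=${2:-/var/tmp/spdk.sock}
    # Poll the RPC socket until the app responds, bailing out if it already died.
    while ! /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s "$rpc_sock" \
            rpc_get_methods &>/dev/null; do
        kill -0 "$pid" 2>/dev/null || return 1
        sleep 0.5
    done
}
# e.g.: waitforlisten_sketch 73704 /var/tmp/bdevperf.sock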
00:24:23.299 09:52:46 -- common/autotest_common.sh@826 -- # xtrace_disable 00:24:23.299 09:52:46 -- common/autotest_common.sh@10 -- # set +x 00:24:23.299 09:52:46 -- target/tls.sh@204 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -c /dev/fd/63 00:24:23.299 09:52:46 -- target/tls.sh@204 -- # echo '{ 00:24:23.299 "subsystems": [ 00:24:23.299 { 00:24:23.299 "subsystem": "keyring", 00:24:23.299 "config": [] 00:24:23.299 }, 00:24:23.299 { 00:24:23.299 "subsystem": "iobuf", 00:24:23.299 "config": [ 00:24:23.299 { 00:24:23.299 "method": "iobuf_set_options", 00:24:23.299 "params": { 00:24:23.299 "small_pool_count": 8192, 00:24:23.299 "large_pool_count": 1024, 00:24:23.299 "small_bufsize": 8192, 00:24:23.299 "large_bufsize": 135168 00:24:23.299 } 00:24:23.299 } 00:24:23.299 ] 00:24:23.299 }, 00:24:23.299 { 00:24:23.299 "subsystem": "sock", 00:24:23.299 "config": [ 00:24:23.299 { 00:24:23.299 "method": "sock_impl_set_options", 00:24:23.299 "params": { 00:24:23.299 "impl_name": "uring", 00:24:23.299 "recv_buf_size": 2097152, 00:24:23.299 "send_buf_size": 2097152, 00:24:23.299 "enable_recv_pipe": true, 00:24:23.299 "enable_quickack": false, 00:24:23.299 "enable_placement_id": 0, 00:24:23.299 "enable_zerocopy_send_server": false, 00:24:23.299 "enable_zerocopy_send_client": false, 00:24:23.299 "zerocopy_threshold": 0, 00:24:23.299 "tls_version": 0, 00:24:23.299 "enable_ktls": false 00:24:23.299 } 00:24:23.299 }, 00:24:23.299 { 00:24:23.299 "method": "sock_impl_set_options", 00:24:23.299 "params": { 00:24:23.299 "impl_name": "posix", 00:24:23.299 "recv_buf_size": 2097152, 00:24:23.299 "send_buf_size": 2097152, 00:24:23.299 "enable_recv_pipe": true, 00:24:23.299 "enable_quickack": false, 00:24:23.299 "enable_placement_id": 0, 00:24:23.299 "enable_zerocopy_send_server": true, 00:24:23.299 "enable_zerocopy_send_client": false, 00:24:23.299 "zerocopy_threshold": 0, 00:24:23.299 "tls_version": 0, 00:24:23.299 "enable_ktls": false 00:24:23.299 } 00:24:23.299 }, 00:24:23.299 { 00:24:23.299 "method": "sock_impl_set_options", 00:24:23.299 "params": { 00:24:23.299 "impl_name": "ssl", 00:24:23.299 "recv_buf_size": 4096, 00:24:23.299 "send_buf_size": 4096, 00:24:23.299 "enable_recv_pipe": true, 00:24:23.299 "enable_quickack": false, 00:24:23.299 "enable_placement_id": 0, 00:24:23.299 "enable_zerocopy_send_server": true, 00:24:23.299 "enable_zerocopy_send_client": false, 00:24:23.299 "zerocopy_threshold": 0, 00:24:23.299 "tls_version": 0, 00:24:23.299 "enable_ktls": false 00:24:23.299 } 00:24:23.299 } 00:24:23.299 ] 00:24:23.299 }, 00:24:23.299 { 00:24:23.299 "subsystem": "vmd", 00:24:23.299 "config": [] 00:24:23.299 }, 00:24:23.299 { 00:24:23.299 "subsystem": "accel", 00:24:23.299 "config": [ 00:24:23.299 { 00:24:23.299 "method": "accel_set_options", 00:24:23.299 "params": { 00:24:23.299 "small_cache_size": 128, 00:24:23.299 "large_cache_size": 16, 00:24:23.299 "task_count": 2048, 00:24:23.299 "sequence_count": 2048, 00:24:23.299 "buf_count": 2048 00:24:23.299 } 00:24:23.299 } 00:24:23.299 ] 00:24:23.299 }, 00:24:23.299 { 00:24:23.299 "subsystem": "bdev", 00:24:23.299 "config": [ 00:24:23.299 { 00:24:23.299 "method": "bdev_set_options", 00:24:23.299 "params": { 00:24:23.299 "bdev_io_pool_size": 65535, 00:24:23.299 "bdev_io_cache_size": 256, 00:24:23.299 "bdev_auto_examine": true, 00:24:23.299 "iobuf_small_cache_size": 128, 00:24:23.299 "iobuf_large_cache_size": 16 00:24:23.299 } 00:24:23.299 }, 00:24:23.299 { 00:24:23.299 "method": 
"bdev_raid_set_options", 00:24:23.300 "params": { 00:24:23.300 "process_window_size_kb": 1024 00:24:23.300 } 00:24:23.300 }, 00:24:23.300 { 00:24:23.300 "method": "bdev_iscsi_set_options", 00:24:23.300 "params": { 00:24:23.300 "timeout_sec": 30 00:24:23.300 } 00:24:23.300 }, 00:24:23.300 { 00:24:23.300 "method": "bdev_nvme_set_options", 00:24:23.300 "params": { 00:24:23.300 "action_on_timeout": "none", 00:24:23.300 "timeout_us": 0, 00:24:23.300 "timeout_admin_us": 0, 00:24:23.300 "keep_alive_timeout_ms": 10000, 00:24:23.300 "arbitration_burst": 0, 00:24:23.300 "low_priority_weight": 0, 00:24:23.300 "medium_priority_weight": 0, 00:24:23.300 "high_priority_weight": 0, 00:24:23.300 "nvme_adminq_poll_period_us": 10000, 00:24:23.300 "nvme_ioq_poll_period_us": 0, 00:24:23.300 "io_queue_requests": 512, 00:24:23.300 "delay_cmd_submit": true, 00:24:23.300 "transport_retry_count": 4, 00:24:23.300 "bdev_retry_count": 3, 00:24:23.300 "transport_ack_timeout": 0, 00:24:23.300 "ctrlr_loss_timeout_sec": 0, 00:24:23.300 "reconnect_delay_sec": 0, 00:24:23.300 "fast_io_fail_timeout_sec": 0, 00:24:23.300 "disable_auto_failback": false, 00:24:23.300 "generate_uuids": false, 00:24:23.300 "transport_tos": 0, 00:24:23.300 "nvme_error_stat": false, 00:24:23.300 "rdma_srq_size": 0, 00:24:23.300 "io_path_stat": false, 00:24:23.300 "allow_accel_sequence": false, 00:24:23.300 "rdma_max_cq_size": 0, 00:24:23.300 "rdma_cm_event_timeout_ms": 0, 00:24:23.300 "dhchap_digests": [ 00:24:23.300 "sha256", 00:24:23.300 "sha384", 00:24:23.300 "sha512" 00:24:23.300 ], 00:24:23.300 "dhchap_dhgroups": [ 00:24:23.300 "null", 00:24:23.300 "ffdhe2048", 00:24:23.300 "ffdhe3072", 00:24:23.300 "ffdhe4096", 00:24:23.300 "ffdhe6144", 00:24:23.300 "ffdhe8192" 00:24:23.300 ] 00:24:23.300 } 00:24:23.300 }, 00:24:23.300 { 00:24:23.300 "method": "bdev_nvme_attach_controller", 00:24:23.300 "params": { 00:24:23.300 "name": "TLSTEST", 00:24:23.300 "trtype": "TCP", 00:24:23.300 "adrfam": "IPv4", 00:24:23.300 "traddr": "10.0.0.2", 00:24:23.300 "trsvcid": "4420", 00:24:23.300 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:24:23.300 "prchk_reftag": false, 00:24:23.300 "prchk_guard": false, 00:24:23.300 "ctrlr_loss_timeout_sec": 0, 00:24:23.300 "reconnect_delay_sec": 0, 00:24:23.300 "fast_io_fail_timeout_sec": 0, 00:24:23.300 "psk": "/tmp/tmp.jUoN3zF1yl", 00:24:23.300 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:24:23.300 "hdgst": false, 00:24:23.300 "ddgst": false 00:24:23.300 } 00:24:23.300 }, 00:24:23.300 { 00:24:23.300 "method": "bdev_nvme_set_hotplug", 00:24:23.300 "params": { 00:24:23.300 "period_us": 100000, 00:24:23.300 "enable": false 00:24:23.300 } 00:24:23.300 }, 00:24:23.300 { 00:24:23.300 "method": "bdev_wait_for_examine" 00:24:23.300 } 00:24:23.300 ] 00:24:23.300 }, 00:24:23.300 { 00:24:23.300 "subsystem": "nbd", 00:24:23.300 "config": [] 00:24:23.300 } 00:24:23.300 ] 00:24:23.300 }' 00:24:23.300 [2024-04-18 09:52:46.597891] Starting SPDK v24.05-pre git sha1 65b4e17c6 / DPDK 23.11.0 initialization... 
00:24:23.300 [2024-04-18 09:52:46.598062] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73704 ] 00:24:23.558 [2024-04-18 09:52:46.774887] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:23.817 [2024-04-18 09:52:47.026293] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:24:24.077 [2024-04-18 09:52:47.413044] bdev_nvme_rpc.c: 515:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:24:24.077 [2024-04-18 09:52:47.413258] nvme_tcp.c:2577:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:24:24.335 09:52:47 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:24:24.335 09:52:47 -- common/autotest_common.sh@850 -- # return 0 00:24:24.335 09:52:47 -- target/tls.sh@211 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:24:24.335 Running I/O for 10 seconds... 00:24:34.309 00:24:34.309 Latency(us) 00:24:34.309 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:34.309 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:24:34.309 Verification LBA range: start 0x0 length 0x2000 00:24:34.309 TLSTESTn1 : 10.04 2818.91 11.01 0.00 0.00 45307.93 12451.84 30146.56 00:24:34.309 =================================================================================================================== 00:24:34.309 Total : 2818.91 11.01 0.00 0.00 45307.93 12451.84 30146.56 00:24:34.309 0 00:24:34.309 09:52:57 -- target/tls.sh@213 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:24:34.309 09:52:57 -- target/tls.sh@214 -- # killprocess 73704 00:24:34.309 09:52:57 -- common/autotest_common.sh@936 -- # '[' -z 73704 ']' 00:24:34.309 09:52:57 -- common/autotest_common.sh@940 -- # kill -0 73704 00:24:34.568 09:52:57 -- common/autotest_common.sh@941 -- # uname 00:24:34.568 09:52:57 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:24:34.568 09:52:57 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 73704 00:24:34.568 killing process with pid 73704 00:24:34.568 Received shutdown signal, test time was about 10.000000 seconds 00:24:34.568 00:24:34.568 Latency(us) 00:24:34.568 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:34.568 =================================================================================================================== 00:24:34.568 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:24:34.568 09:52:57 -- common/autotest_common.sh@942 -- # process_name=reactor_2 00:24:34.568 09:52:57 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']' 00:24:34.568 09:52:57 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 73704' 00:24:34.568 09:52:57 -- common/autotest_common.sh@955 -- # kill 73704 00:24:34.568 [2024-04-18 09:52:57.726825] app.c: 937:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:24:34.568 09:52:57 -- common/autotest_common.sh@960 -- # wait 73704 00:24:35.942 09:52:58 -- target/tls.sh@215 -- # killprocess 73672 00:24:35.942 09:52:58 -- common/autotest_common.sh@936 -- # '[' -z 73672 ']' 00:24:35.942 09:52:58 -- common/autotest_common.sh@940 -- # kill -0 73672 00:24:35.942 
09:52:58 -- common/autotest_common.sh@941 -- # uname 00:24:35.942 09:52:58 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:24:35.942 09:52:58 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 73672 00:24:35.942 killing process with pid 73672 00:24:35.943 09:52:58 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:24:35.943 09:52:58 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:24:35.943 09:52:58 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 73672' 00:24:35.943 09:52:58 -- common/autotest_common.sh@955 -- # kill 73672 00:24:35.943 [2024-04-18 09:52:58.935590] app.c: 937:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:24:35.943 09:52:58 -- common/autotest_common.sh@960 -- # wait 73672 00:24:36.877 09:53:00 -- target/tls.sh@218 -- # nvmfappstart 00:24:36.877 09:53:00 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:24:36.877 09:53:00 -- common/autotest_common.sh@710 -- # xtrace_disable 00:24:36.877 09:53:00 -- common/autotest_common.sh@10 -- # set +x 00:24:36.877 09:53:00 -- nvmf/common.sh@470 -- # nvmfpid=73870 00:24:36.877 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:36.877 09:53:00 -- nvmf/common.sh@471 -- # waitforlisten 73870 00:24:36.877 09:53:00 -- nvmf/common.sh@469 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:24:36.877 09:53:00 -- common/autotest_common.sh@817 -- # '[' -z 73870 ']' 00:24:36.877 09:53:00 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:36.877 09:53:00 -- common/autotest_common.sh@822 -- # local max_retries=100 00:24:36.877 09:53:00 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:36.877 09:53:00 -- common/autotest_common.sh@826 -- # xtrace_disable 00:24:36.877 09:53:00 -- common/autotest_common.sh@10 -- # set +x 00:24:37.136 [2024-04-18 09:53:00.330435] Starting SPDK v24.05-pre git sha1 65b4e17c6 / DPDK 23.11.0 initialization... 00:24:37.136 [2024-04-18 09:53:00.330853] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:37.136 [2024-04-18 09:53:00.510926] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:37.394 [2024-04-18 09:53:00.781655] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:37.394 [2024-04-18 09:53:00.781927] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:37.394 [2024-04-18 09:53:00.782088] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:37.394 [2024-04-18 09:53:00.782433] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:37.394 [2024-04-18 09:53:00.782486] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
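The 10-second TLSTESTn1 run above follows the usual remote-controlled bdevperf pattern: the application is started idle with -z, parks on its own RPC socket, and a separate bdevperf.py call triggers the preconfigured workload. Reduced to its essentials (config construction and the attach RPCs omitted; reading bdevperf.py's -t 20 as a client-side wait timeout is an assumption):

# Start bdevperf idle (-z) with its RPC server on a private socket.
/home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
    -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 \
    -c <(echo "$bperf_config") &
bdevperf_pid=$!
# ... wait for the socket, register keys / attach the TLS controller ...
# Kick off the workload; the 10 s I/O duration was fixed by -t 10 above,
# while -t 20 here only bounds how long the client waits for the result.
/home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py \
    -t 20 -s /var/tmp/bdevperf.sock perform_tests
kill "$bdevperf_pid"; wait "$bdevperf_pid"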
00:24:37.394 [2024-04-18 09:53:00.782643] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:24:37.961 09:53:01 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:24:37.961 09:53:01 -- common/autotest_common.sh@850 -- # return 0 00:24:37.961 09:53:01 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:24:37.961 09:53:01 -- common/autotest_common.sh@716 -- # xtrace_disable 00:24:37.961 09:53:01 -- common/autotest_common.sh@10 -- # set +x 00:24:37.961 09:53:01 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:37.961 09:53:01 -- target/tls.sh@219 -- # setup_nvmf_tgt /tmp/tmp.jUoN3zF1yl 00:24:37.961 09:53:01 -- target/tls.sh@49 -- # local key=/tmp/tmp.jUoN3zF1yl 00:24:37.961 09:53:01 -- target/tls.sh@51 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:24:38.220 [2024-04-18 09:53:01.507757] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:38.220 09:53:01 -- target/tls.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:24:38.479 09:53:01 -- target/tls.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:24:38.737 [2024-04-18 09:53:01.964319] tcp.c: 925:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:24:38.737 [2024-04-18 09:53:01.964627] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:38.737 09:53:01 -- target/tls.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:24:38.996 malloc0 00:24:38.996 09:53:02 -- target/tls.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:24:39.255 09:53:02 -- target/tls.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.jUoN3zF1yl 00:24:39.514 [2024-04-18 09:53:02.718483] tcp.c:3652:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:24:39.514 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:24:39.514 09:53:02 -- target/tls.sh@222 -- # bdevperf_pid=73920 00:24:39.514 09:53:02 -- target/tls.sh@220 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 00:24:39.514 09:53:02 -- target/tls.sh@224 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:24:39.514 09:53:02 -- target/tls.sh@225 -- # waitforlisten 73920 /var/tmp/bdevperf.sock 00:24:39.514 09:53:02 -- common/autotest_common.sh@817 -- # '[' -z 73920 ']' 00:24:39.514 09:53:02 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:24:39.514 09:53:02 -- common/autotest_common.sh@822 -- # local max_retries=100 00:24:39.514 09:53:02 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:24:39.514 09:53:02 -- common/autotest_common.sh@826 -- # xtrace_disable 00:24:39.514 09:53:02 -- common/autotest_common.sh@10 -- # set +x 00:24:39.514 [2024-04-18 09:53:02.843037] Starting SPDK v24.05-pre git sha1 65b4e17c6 / DPDK 23.11.0 initialization... 
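Collected in one place, the setup_nvmf_tgt sequence traced just above reduces to the following rpc.py calls (copied from the trace with the xtrace prefixes stripped; /tmp/tmp.jUoN3zF1yl is the temporary PSK file generated earlier in the test):

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
$rpc nvmf_create_transport -t tcp -o
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp \
    -a 10.0.0.2 -s 4420 -k                        # -k: TLS-secured listener
$rpc bdev_malloc_create 32 4096 -b malloc0        # 32 MiB RAM-backed namespace
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
$rpc nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 \
    --psk /tmp/tmp.jUoN3zF1yl                     # PSK given by path (deprecated form)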
00:24:39.514 [2024-04-18 09:53:02.843459] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73920 ] 00:24:39.773 [2024-04-18 09:53:03.033341] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:40.032 [2024-04-18 09:53:03.327539] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:24:40.600 09:53:03 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:24:40.600 09:53:03 -- common/autotest_common.sh@850 -- # return 0 00:24:40.600 09:53:03 -- target/tls.sh@227 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.jUoN3zF1yl 00:24:40.600 09:53:03 -- target/tls.sh@228 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:24:40.858 [2024-04-18 09:53:04.115822] bdev_nvme_rpc.c: 515:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:24:40.858 nvme0n1 00:24:40.858 09:53:04 -- target/tls.sh@232 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:24:41.117 Running I/O for 1 seconds... 00:24:42.062 00:24:42.062 Latency(us) 00:24:42.062 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:42.062 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:24:42.062 Verification LBA range: start 0x0 length 0x2000 00:24:42.062 nvme0n1 : 1.04 2670.16 10.43 0.00 0.00 47046.62 8877.15 27525.12 00:24:42.062 =================================================================================================================== 00:24:42.062 Total : 2670.16 10.43 0.00 0.00 47046.62 8877.15 27525.12 00:24:42.062 0 00:24:42.062 09:53:05 -- target/tls.sh@234 -- # killprocess 73920 00:24:42.062 09:53:05 -- common/autotest_common.sh@936 -- # '[' -z 73920 ']' 00:24:42.062 09:53:05 -- common/autotest_common.sh@940 -- # kill -0 73920 00:24:42.062 09:53:05 -- common/autotest_common.sh@941 -- # uname 00:24:42.062 09:53:05 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:24:42.062 09:53:05 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 73920 00:24:42.062 killing process with pid 73920 00:24:42.062 Received shutdown signal, test time was about 1.000000 seconds 00:24:42.062 00:24:42.062 Latency(us) 00:24:42.062 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:42.062 =================================================================================================================== 00:24:42.062 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:24:42.062 09:53:05 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:24:42.062 09:53:05 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:24:42.062 09:53:05 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 73920' 00:24:42.062 09:53:05 -- common/autotest_common.sh@955 -- # kill 73920 00:24:42.062 09:53:05 -- common/autotest_common.sh@960 -- # wait 73920 00:24:43.456 09:53:06 -- target/tls.sh@235 -- # killprocess 73870 00:24:43.456 09:53:06 -- common/autotest_common.sh@936 -- # '[' -z 73870 ']' 00:24:43.456 09:53:06 -- common/autotest_common.sh@940 -- # kill -0 73870 00:24:43.456 09:53:06 -- common/autotest_common.sh@941 -- # 
uname 00:24:43.456 09:53:06 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:24:43.456 09:53:06 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 73870 00:24:43.456 killing process with pid 73870 00:24:43.456 09:53:06 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:24:43.456 09:53:06 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:24:43.456 09:53:06 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 73870' 00:24:43.456 09:53:06 -- common/autotest_common.sh@955 -- # kill 73870 00:24:43.456 [2024-04-18 09:53:06.630652] app.c: 937:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:24:43.456 09:53:06 -- common/autotest_common.sh@960 -- # wait 73870 00:24:44.833 09:53:07 -- target/tls.sh@238 -- # nvmfappstart 00:24:44.833 09:53:07 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:24:44.833 09:53:07 -- common/autotest_common.sh@710 -- # xtrace_disable 00:24:44.833 09:53:07 -- common/autotest_common.sh@10 -- # set +x 00:24:44.833 09:53:07 -- nvmf/common.sh@470 -- # nvmfpid=73990 00:24:44.833 09:53:07 -- nvmf/common.sh@469 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:24:44.833 09:53:07 -- nvmf/common.sh@471 -- # waitforlisten 73990 00:24:44.833 09:53:07 -- common/autotest_common.sh@817 -- # '[' -z 73990 ']' 00:24:44.833 09:53:07 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:44.833 09:53:07 -- common/autotest_common.sh@822 -- # local max_retries=100 00:24:44.833 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:44.833 09:53:07 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:44.833 09:53:07 -- common/autotest_common.sh@826 -- # xtrace_disable 00:24:44.833 09:53:07 -- common/autotest_common.sh@10 -- # set +x 00:24:44.833 [2024-04-18 09:53:08.014259] Starting SPDK v24.05-pre git sha1 65b4e17c6 / DPDK 23.11.0 initialization... 00:24:44.833 [2024-04-18 09:53:08.014459] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:44.833 [2024-04-18 09:53:08.192994] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:45.400 [2024-04-18 09:53:08.485259] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:45.401 [2024-04-18 09:53:08.485346] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:45.401 [2024-04-18 09:53:08.485367] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:45.401 [2024-04-18 09:53:08.485395] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:45.401 [2024-04-18 09:53:08.485411] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
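The bdevperf attach in the run just torn down (pid 73920) and in the run that follows no longer embeds the PSK path in the controller options; the key file is registered with the keyring first and referenced by name, which is the direction the 'PSK path' and 'spdk_nvme_ctrlr_opts.psk' deprecation warnings above point to. Stripped of the trace prefixes, that initiator-side pair of calls is:

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
# Register the PSK file under a named key on the bdevperf app...
$rpc -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.jUoN3zF1yl
# ...then attach to the TLS listener referencing the key by name.
$rpc -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp \
    -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 \
    -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1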
00:24:45.401 [2024-04-18 09:53:08.485448] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:24:45.660 09:53:08 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:24:45.660 09:53:08 -- common/autotest_common.sh@850 -- # return 0 00:24:45.660 09:53:08 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:24:45.660 09:53:08 -- common/autotest_common.sh@716 -- # xtrace_disable 00:24:45.660 09:53:08 -- common/autotest_common.sh@10 -- # set +x 00:24:45.660 09:53:08 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:45.660 09:53:08 -- target/tls.sh@239 -- # rpc_cmd 00:24:45.660 09:53:08 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:45.660 09:53:08 -- common/autotest_common.sh@10 -- # set +x 00:24:45.660 [2024-04-18 09:53:08.949022] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:45.660 malloc0 00:24:45.660 [2024-04-18 09:53:09.020031] tcp.c: 925:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:24:45.660 [2024-04-18 09:53:09.020482] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:45.660 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:24:45.660 09:53:09 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:45.660 09:53:09 -- target/tls.sh@252 -- # bdevperf_pid=74028 00:24:45.660 09:53:09 -- target/tls.sh@254 -- # waitforlisten 74028 /var/tmp/bdevperf.sock 00:24:45.660 09:53:09 -- target/tls.sh@250 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 00:24:45.660 09:53:09 -- common/autotest_common.sh@817 -- # '[' -z 74028 ']' 00:24:45.660 09:53:09 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:24:45.660 09:53:09 -- common/autotest_common.sh@822 -- # local max_retries=100 00:24:45.660 09:53:09 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:24:45.660 09:53:09 -- common/autotest_common.sh@826 -- # xtrace_disable 00:24:45.660 09:53:09 -- common/autotest_common.sh@10 -- # set +x 00:24:45.919 [2024-04-18 09:53:09.155527] Starting SPDK v24.05-pre git sha1 65b4e17c6 / DPDK 23.11.0 initialization... 
00:24:45.919 [2024-04-18 09:53:09.155969] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74028 ] 00:24:46.178 [2024-04-18 09:53:09.334064] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:46.436 [2024-04-18 09:53:09.626275] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:24:46.695 09:53:10 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:24:46.695 09:53:10 -- common/autotest_common.sh@850 -- # return 0 00:24:46.695 09:53:10 -- target/tls.sh@255 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.jUoN3zF1yl 00:24:46.953 09:53:10 -- target/tls.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:24:47.211 [2024-04-18 09:53:10.491881] bdev_nvme_rpc.c: 515:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:24:47.211 nvme0n1 00:24:47.212 09:53:10 -- target/tls.sh@260 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:24:47.471 Running I/O for 1 seconds... 00:24:48.408 00:24:48.408 Latency(us) 00:24:48.408 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:48.408 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:24:48.408 Verification LBA range: start 0x0 length 0x2000 00:24:48.408 nvme0n1 : 1.04 2426.47 9.48 0.00 0.00 51766.37 10426.18 35270.28 00:24:48.408 =================================================================================================================== 00:24:48.408 Total : 2426.47 9.48 0.00 0.00 51766.37 10426.18 35270.28 00:24:48.408 0 00:24:48.408 09:53:11 -- target/tls.sh@263 -- # rpc_cmd save_config 00:24:48.408 09:53:11 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:48.408 09:53:11 -- common/autotest_common.sh@10 -- # set +x 00:24:48.667 09:53:11 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:48.667 09:53:11 -- target/tls.sh@263 -- # tgtcfg='{ 00:24:48.667 "subsystems": [ 00:24:48.667 { 00:24:48.667 "subsystem": "keyring", 00:24:48.667 "config": [ 00:24:48.667 { 00:24:48.667 "method": "keyring_file_add_key", 00:24:48.667 "params": { 00:24:48.667 "name": "key0", 00:24:48.667 "path": "/tmp/tmp.jUoN3zF1yl" 00:24:48.667 } 00:24:48.667 } 00:24:48.667 ] 00:24:48.667 }, 00:24:48.667 { 00:24:48.667 "subsystem": "iobuf", 00:24:48.667 "config": [ 00:24:48.667 { 00:24:48.667 "method": "iobuf_set_options", 00:24:48.667 "params": { 00:24:48.667 "small_pool_count": 8192, 00:24:48.667 "large_pool_count": 1024, 00:24:48.667 "small_bufsize": 8192, 00:24:48.667 "large_bufsize": 135168 00:24:48.667 } 00:24:48.667 } 00:24:48.667 ] 00:24:48.667 }, 00:24:48.667 { 00:24:48.667 "subsystem": "sock", 00:24:48.667 "config": [ 00:24:48.667 { 00:24:48.667 "method": "sock_impl_set_options", 00:24:48.667 "params": { 00:24:48.667 "impl_name": "uring", 00:24:48.667 "recv_buf_size": 2097152, 00:24:48.667 "send_buf_size": 2097152, 00:24:48.667 "enable_recv_pipe": true, 00:24:48.667 "enable_quickack": false, 00:24:48.667 "enable_placement_id": 0, 00:24:48.667 "enable_zerocopy_send_server": false, 00:24:48.667 "enable_zerocopy_send_client": false, 00:24:48.667 "zerocopy_threshold": 0, 
00:24:48.667 "tls_version": 0, 00:24:48.667 "enable_ktls": false 00:24:48.667 } 00:24:48.667 }, 00:24:48.667 { 00:24:48.667 "method": "sock_impl_set_options", 00:24:48.667 "params": { 00:24:48.667 "impl_name": "posix", 00:24:48.667 "recv_buf_size": 2097152, 00:24:48.667 "send_buf_size": 2097152, 00:24:48.667 "enable_recv_pipe": true, 00:24:48.667 "enable_quickack": false, 00:24:48.667 "enable_placement_id": 0, 00:24:48.667 "enable_zerocopy_send_server": true, 00:24:48.667 "enable_zerocopy_send_client": false, 00:24:48.667 "zerocopy_threshold": 0, 00:24:48.667 "tls_version": 0, 00:24:48.667 "enable_ktls": false 00:24:48.667 } 00:24:48.667 }, 00:24:48.667 { 00:24:48.667 "method": "sock_impl_set_options", 00:24:48.667 "params": { 00:24:48.667 "impl_name": "ssl", 00:24:48.667 "recv_buf_size": 4096, 00:24:48.667 "send_buf_size": 4096, 00:24:48.667 "enable_recv_pipe": true, 00:24:48.667 "enable_quickack": false, 00:24:48.667 "enable_placement_id": 0, 00:24:48.667 "enable_zerocopy_send_server": true, 00:24:48.667 "enable_zerocopy_send_client": false, 00:24:48.667 "zerocopy_threshold": 0, 00:24:48.667 "tls_version": 0, 00:24:48.667 "enable_ktls": false 00:24:48.667 } 00:24:48.667 } 00:24:48.667 ] 00:24:48.667 }, 00:24:48.667 { 00:24:48.667 "subsystem": "vmd", 00:24:48.667 "config": [] 00:24:48.667 }, 00:24:48.667 { 00:24:48.667 "subsystem": "accel", 00:24:48.667 "config": [ 00:24:48.667 { 00:24:48.667 "method": "accel_set_options", 00:24:48.667 "params": { 00:24:48.667 "small_cache_size": 128, 00:24:48.667 "large_cache_size": 16, 00:24:48.668 "task_count": 2048, 00:24:48.668 "sequence_count": 2048, 00:24:48.668 "buf_count": 2048 00:24:48.668 } 00:24:48.668 } 00:24:48.668 ] 00:24:48.668 }, 00:24:48.668 { 00:24:48.668 "subsystem": "bdev", 00:24:48.668 "config": [ 00:24:48.668 { 00:24:48.668 "method": "bdev_set_options", 00:24:48.668 "params": { 00:24:48.668 "bdev_io_pool_size": 65535, 00:24:48.668 "bdev_io_cache_size": 256, 00:24:48.668 "bdev_auto_examine": true, 00:24:48.668 "iobuf_small_cache_size": 128, 00:24:48.668 "iobuf_large_cache_size": 16 00:24:48.668 } 00:24:48.668 }, 00:24:48.668 { 00:24:48.668 "method": "bdev_raid_set_options", 00:24:48.668 "params": { 00:24:48.668 "process_window_size_kb": 1024 00:24:48.668 } 00:24:48.668 }, 00:24:48.668 { 00:24:48.668 "method": "bdev_iscsi_set_options", 00:24:48.668 "params": { 00:24:48.668 "timeout_sec": 30 00:24:48.668 } 00:24:48.668 }, 00:24:48.668 { 00:24:48.668 "method": "bdev_nvme_set_options", 00:24:48.668 "params": { 00:24:48.668 "action_on_timeout": "none", 00:24:48.668 "timeout_us": 0, 00:24:48.668 "timeout_admin_us": 0, 00:24:48.668 "keep_alive_timeout_ms": 10000, 00:24:48.668 "arbitration_burst": 0, 00:24:48.668 "low_priority_weight": 0, 00:24:48.668 "medium_priority_weight": 0, 00:24:48.668 "high_priority_weight": 0, 00:24:48.668 "nvme_adminq_poll_period_us": 10000, 00:24:48.668 "nvme_ioq_poll_period_us": 0, 00:24:48.668 "io_queue_requests": 0, 00:24:48.668 "delay_cmd_submit": true, 00:24:48.668 "transport_retry_count": 4, 00:24:48.668 "bdev_retry_count": 3, 00:24:48.668 "transport_ack_timeout": 0, 00:24:48.668 "ctrlr_loss_timeout_sec": 0, 00:24:48.668 "reconnect_delay_sec": 0, 00:24:48.668 "fast_io_fail_timeout_sec": 0, 00:24:48.668 "disable_auto_failback": false, 00:24:48.668 "generate_uuids": false, 00:24:48.668 "transport_tos": 0, 00:24:48.668 "nvme_error_stat": false, 00:24:48.668 "rdma_srq_size": 0, 00:24:48.668 "io_path_stat": false, 00:24:48.668 "allow_accel_sequence": false, 00:24:48.668 "rdma_max_cq_size": 0, 00:24:48.668 
"rdma_cm_event_timeout_ms": 0, 00:24:48.668 "dhchap_digests": [ 00:24:48.668 "sha256", 00:24:48.668 "sha384", 00:24:48.668 "sha512" 00:24:48.668 ], 00:24:48.668 "dhchap_dhgroups": [ 00:24:48.668 "null", 00:24:48.668 "ffdhe2048", 00:24:48.668 "ffdhe3072", 00:24:48.668 "ffdhe4096", 00:24:48.668 "ffdhe6144", 00:24:48.668 "ffdhe8192" 00:24:48.668 ] 00:24:48.668 } 00:24:48.668 }, 00:24:48.668 { 00:24:48.668 "method": "bdev_nvme_set_hotplug", 00:24:48.668 "params": { 00:24:48.668 "period_us": 100000, 00:24:48.668 "enable": false 00:24:48.668 } 00:24:48.668 }, 00:24:48.668 { 00:24:48.668 "method": "bdev_malloc_create", 00:24:48.668 "params": { 00:24:48.668 "name": "malloc0", 00:24:48.668 "num_blocks": 8192, 00:24:48.668 "block_size": 4096, 00:24:48.668 "physical_block_size": 4096, 00:24:48.668 "uuid": "f708913a-6acc-4ddf-b1b9-f1db7360e53d", 00:24:48.668 "optimal_io_boundary": 0 00:24:48.668 } 00:24:48.668 }, 00:24:48.668 { 00:24:48.668 "method": "bdev_wait_for_examine" 00:24:48.668 } 00:24:48.668 ] 00:24:48.668 }, 00:24:48.668 { 00:24:48.668 "subsystem": "nbd", 00:24:48.668 "config": [] 00:24:48.668 }, 00:24:48.668 { 00:24:48.668 "subsystem": "scheduler", 00:24:48.668 "config": [ 00:24:48.668 { 00:24:48.668 "method": "framework_set_scheduler", 00:24:48.668 "params": { 00:24:48.668 "name": "static" 00:24:48.668 } 00:24:48.668 } 00:24:48.668 ] 00:24:48.668 }, 00:24:48.668 { 00:24:48.668 "subsystem": "nvmf", 00:24:48.668 "config": [ 00:24:48.668 { 00:24:48.668 "method": "nvmf_set_config", 00:24:48.668 "params": { 00:24:48.668 "discovery_filter": "match_any", 00:24:48.668 "admin_cmd_passthru": { 00:24:48.668 "identify_ctrlr": false 00:24:48.668 } 00:24:48.668 } 00:24:48.668 }, 00:24:48.668 { 00:24:48.668 "method": "nvmf_set_max_subsystems", 00:24:48.668 "params": { 00:24:48.668 "max_subsystems": 1024 00:24:48.668 } 00:24:48.668 }, 00:24:48.668 { 00:24:48.668 "method": "nvmf_set_crdt", 00:24:48.668 "params": { 00:24:48.668 "crdt1": 0, 00:24:48.668 "crdt2": 0, 00:24:48.668 "crdt3": 0 00:24:48.668 } 00:24:48.668 }, 00:24:48.668 { 00:24:48.668 "method": "nvmf_create_transport", 00:24:48.668 "params": { 00:24:48.668 "trtype": "TCP", 00:24:48.668 "max_queue_depth": 128, 00:24:48.668 "max_io_qpairs_per_ctrlr": 127, 00:24:48.668 "in_capsule_data_size": 4096, 00:24:48.668 "max_io_size": 131072, 00:24:48.668 "io_unit_size": 131072, 00:24:48.668 "max_aq_depth": 128, 00:24:48.668 "num_shared_buffers": 511, 00:24:48.668 "buf_cache_size": 4294967295, 00:24:48.668 "dif_insert_or_strip": false, 00:24:48.668 "zcopy": false, 00:24:48.668 "c2h_success": false, 00:24:48.668 "sock_priority": 0, 00:24:48.668 "abort_timeout_sec": 1, 00:24:48.668 "ack_timeout": 0 00:24:48.668 } 00:24:48.668 }, 00:24:48.668 { 00:24:48.668 "method": "nvmf_create_subsystem", 00:24:48.668 "params": { 00:24:48.668 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:24:48.668 "allow_any_host": false, 00:24:48.668 "serial_number": "00000000000000000000", 00:24:48.668 "model_number": "SPDK bdev Controller", 00:24:48.668 "max_namespaces": 32, 00:24:48.668 "min_cntlid": 1, 00:24:48.668 "max_cntlid": 65519, 00:24:48.668 "ana_reporting": false 00:24:48.668 } 00:24:48.668 }, 00:24:48.668 { 00:24:48.668 "method": "nvmf_subsystem_add_host", 00:24:48.668 "params": { 00:24:48.668 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:24:48.668 "host": "nqn.2016-06.io.spdk:host1", 00:24:48.668 "psk": "key0" 00:24:48.668 } 00:24:48.668 }, 00:24:48.668 { 00:24:48.668 "method": "nvmf_subsystem_add_ns", 00:24:48.668 "params": { 00:24:48.668 "nqn": "nqn.2016-06.io.spdk:cnode1", 
00:24:48.668 "namespace": { 00:24:48.668 "nsid": 1, 00:24:48.668 "bdev_name": "malloc0", 00:24:48.668 "nguid": "F708913A6ACC4DDFB1B9F1DB7360E53D", 00:24:48.668 "uuid": "f708913a-6acc-4ddf-b1b9-f1db7360e53d", 00:24:48.668 "no_auto_visible": false 00:24:48.668 } 00:24:48.668 } 00:24:48.668 }, 00:24:48.668 { 00:24:48.668 "method": "nvmf_subsystem_add_listener", 00:24:48.668 "params": { 00:24:48.668 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:24:48.668 "listen_address": { 00:24:48.668 "trtype": "TCP", 00:24:48.668 "adrfam": "IPv4", 00:24:48.668 "traddr": "10.0.0.2", 00:24:48.668 "trsvcid": "4420" 00:24:48.668 }, 00:24:48.668 "secure_channel": true 00:24:48.668 } 00:24:48.668 } 00:24:48.668 ] 00:24:48.668 } 00:24:48.668 ] 00:24:48.668 }' 00:24:48.668 09:53:11 -- target/tls.sh@264 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:24:48.927 09:53:12 -- target/tls.sh@264 -- # bperfcfg='{ 00:24:48.927 "subsystems": [ 00:24:48.927 { 00:24:48.927 "subsystem": "keyring", 00:24:48.927 "config": [ 00:24:48.927 { 00:24:48.927 "method": "keyring_file_add_key", 00:24:48.927 "params": { 00:24:48.927 "name": "key0", 00:24:48.927 "path": "/tmp/tmp.jUoN3zF1yl" 00:24:48.927 } 00:24:48.927 } 00:24:48.927 ] 00:24:48.927 }, 00:24:48.927 { 00:24:48.927 "subsystem": "iobuf", 00:24:48.927 "config": [ 00:24:48.927 { 00:24:48.927 "method": "iobuf_set_options", 00:24:48.927 "params": { 00:24:48.927 "small_pool_count": 8192, 00:24:48.927 "large_pool_count": 1024, 00:24:48.927 "small_bufsize": 8192, 00:24:48.927 "large_bufsize": 135168 00:24:48.927 } 00:24:48.927 } 00:24:48.927 ] 00:24:48.927 }, 00:24:48.927 { 00:24:48.927 "subsystem": "sock", 00:24:48.927 "config": [ 00:24:48.927 { 00:24:48.927 "method": "sock_impl_set_options", 00:24:48.927 "params": { 00:24:48.927 "impl_name": "uring", 00:24:48.927 "recv_buf_size": 2097152, 00:24:48.927 "send_buf_size": 2097152, 00:24:48.927 "enable_recv_pipe": true, 00:24:48.927 "enable_quickack": false, 00:24:48.927 "enable_placement_id": 0, 00:24:48.927 "enable_zerocopy_send_server": false, 00:24:48.927 "enable_zerocopy_send_client": false, 00:24:48.927 "zerocopy_threshold": 0, 00:24:48.927 "tls_version": 0, 00:24:48.927 "enable_ktls": false 00:24:48.927 } 00:24:48.927 }, 00:24:48.927 { 00:24:48.927 "method": "sock_impl_set_options", 00:24:48.927 "params": { 00:24:48.927 "impl_name": "posix", 00:24:48.927 "recv_buf_size": 2097152, 00:24:48.927 "send_buf_size": 2097152, 00:24:48.927 "enable_recv_pipe": true, 00:24:48.927 "enable_quickack": false, 00:24:48.927 "enable_placement_id": 0, 00:24:48.927 "enable_zerocopy_send_server": true, 00:24:48.927 "enable_zerocopy_send_client": false, 00:24:48.927 "zerocopy_threshold": 0, 00:24:48.927 "tls_version": 0, 00:24:48.927 "enable_ktls": false 00:24:48.927 } 00:24:48.927 }, 00:24:48.927 { 00:24:48.927 "method": "sock_impl_set_options", 00:24:48.927 "params": { 00:24:48.927 "impl_name": "ssl", 00:24:48.927 "recv_buf_size": 4096, 00:24:48.927 "send_buf_size": 4096, 00:24:48.927 "enable_recv_pipe": true, 00:24:48.927 "enable_quickack": false, 00:24:48.927 "enable_placement_id": 0, 00:24:48.927 "enable_zerocopy_send_server": true, 00:24:48.927 "enable_zerocopy_send_client": false, 00:24:48.927 "zerocopy_threshold": 0, 00:24:48.927 "tls_version": 0, 00:24:48.927 "enable_ktls": false 00:24:48.927 } 00:24:48.927 } 00:24:48.927 ] 00:24:48.927 }, 00:24:48.927 { 00:24:48.927 "subsystem": "vmd", 00:24:48.927 "config": [] 00:24:48.927 }, 00:24:48.927 { 00:24:48.927 "subsystem": "accel", 00:24:48.927 "config": [ 
00:24:48.927 { 00:24:48.927 "method": "accel_set_options", 00:24:48.927 "params": { 00:24:48.927 "small_cache_size": 128, 00:24:48.927 "large_cache_size": 16, 00:24:48.927 "task_count": 2048, 00:24:48.927 "sequence_count": 2048, 00:24:48.927 "buf_count": 2048 00:24:48.927 } 00:24:48.927 } 00:24:48.927 ] 00:24:48.927 }, 00:24:48.927 { 00:24:48.927 "subsystem": "bdev", 00:24:48.927 "config": [ 00:24:48.927 { 00:24:48.927 "method": "bdev_set_options", 00:24:48.927 "params": { 00:24:48.927 "bdev_io_pool_size": 65535, 00:24:48.927 "bdev_io_cache_size": 256, 00:24:48.927 "bdev_auto_examine": true, 00:24:48.927 "iobuf_small_cache_size": 128, 00:24:48.927 "iobuf_large_cache_size": 16 00:24:48.927 } 00:24:48.927 }, 00:24:48.927 { 00:24:48.927 "method": "bdev_raid_set_options", 00:24:48.927 "params": { 00:24:48.927 "process_window_size_kb": 1024 00:24:48.927 } 00:24:48.927 }, 00:24:48.927 { 00:24:48.927 "method": "bdev_iscsi_set_options", 00:24:48.927 "params": { 00:24:48.927 "timeout_sec": 30 00:24:48.927 } 00:24:48.927 }, 00:24:48.927 { 00:24:48.927 "method": "bdev_nvme_set_options", 00:24:48.927 "params": { 00:24:48.927 "action_on_timeout": "none", 00:24:48.927 "timeout_us": 0, 00:24:48.927 "timeout_admin_us": 0, 00:24:48.927 "keep_alive_timeout_ms": 10000, 00:24:48.927 "arbitration_burst": 0, 00:24:48.927 "low_priority_weight": 0, 00:24:48.927 "medium_priority_weight": 0, 00:24:48.927 "high_priority_weight": 0, 00:24:48.927 "nvme_adminq_poll_period_us": 10000, 00:24:48.927 "nvme_ioq_poll_period_us": 0, 00:24:48.927 "io_queue_requests": 512, 00:24:48.927 "delay_cmd_submit": true, 00:24:48.927 "transport_retry_count": 4, 00:24:48.927 "bdev_retry_count": 3, 00:24:48.927 "transport_ack_timeout": 0, 00:24:48.927 "ctrlr_loss_timeout_sec": 0, 00:24:48.927 "reconnect_delay_sec": 0, 00:24:48.927 "fast_io_fail_timeout_sec": 0, 00:24:48.927 "disable_auto_failback": false, 00:24:48.927 "generate_uuids": false, 00:24:48.927 "transport_tos": 0, 00:24:48.927 "nvme_error_stat": false, 00:24:48.927 "rdma_srq_size": 0, 00:24:48.927 "io_path_stat": false, 00:24:48.927 "allow_accel_sequence": false, 00:24:48.927 "rdma_max_cq_size": 0, 00:24:48.927 "rdma_cm_event_timeout_ms": 0, 00:24:48.927 "dhchap_digests": [ 00:24:48.927 "sha256", 00:24:48.927 "sha384", 00:24:48.927 "sha512" 00:24:48.927 ], 00:24:48.927 "dhchap_dhgroups": [ 00:24:48.927 "null", 00:24:48.927 "ffdhe2048", 00:24:48.927 "ffdhe3072", 00:24:48.927 "ffdhe4096", 00:24:48.927 "ffdhe6144", 00:24:48.927 "ffdhe8192" 00:24:48.927 ] 00:24:48.927 } 00:24:48.927 }, 00:24:48.927 { 00:24:48.927 "method": "bdev_nvme_attach_controller", 00:24:48.927 "params": { 00:24:48.927 "name": "nvme0", 00:24:48.927 "trtype": "TCP", 00:24:48.927 "adrfam": "IPv4", 00:24:48.927 "traddr": "10.0.0.2", 00:24:48.927 "trsvcid": "4420", 00:24:48.927 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:24:48.927 "prchk_reftag": false, 00:24:48.927 "prchk_guard": false, 00:24:48.927 "ctrlr_loss_timeout_sec": 0, 00:24:48.927 "reconnect_delay_sec": 0, 00:24:48.927 "fast_io_fail_timeout_sec": 0, 00:24:48.927 "psk": "key0", 00:24:48.927 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:24:48.927 "hdgst": false, 00:24:48.927 "ddgst": false 00:24:48.927 } 00:24:48.927 }, 00:24:48.927 { 00:24:48.927 "method": "bdev_nvme_set_hotplug", 00:24:48.927 "params": { 00:24:48.927 "period_us": 100000, 00:24:48.927 "enable": false 00:24:48.927 } 00:24:48.927 }, 00:24:48.927 { 00:24:48.927 "method": "bdev_enable_histogram", 00:24:48.927 "params": { 00:24:48.927 "name": "nvme0n1", 00:24:48.927 "enable": true 00:24:48.927 } 
00:24:48.927 }, 00:24:48.927 { 00:24:48.927 "method": "bdev_wait_for_examine" 00:24:48.927 } 00:24:48.927 ] 00:24:48.927 }, 00:24:48.927 { 00:24:48.927 "subsystem": "nbd", 00:24:48.927 "config": [] 00:24:48.927 } 00:24:48.927 ] 00:24:48.927 }' 00:24:48.927 09:53:12 -- target/tls.sh@266 -- # killprocess 74028 00:24:48.927 09:53:12 -- common/autotest_common.sh@936 -- # '[' -z 74028 ']' 00:24:48.927 09:53:12 -- common/autotest_common.sh@940 -- # kill -0 74028 00:24:48.927 09:53:12 -- common/autotest_common.sh@941 -- # uname 00:24:48.927 09:53:12 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:24:48.927 09:53:12 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 74028 00:24:48.927 killing process with pid 74028 00:24:48.927 Received shutdown signal, test time was about 1.000000 seconds 00:24:48.927 00:24:48.927 Latency(us) 00:24:48.927 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:48.928 =================================================================================================================== 00:24:48.928 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:24:48.928 09:53:12 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:24:48.928 09:53:12 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:24:48.928 09:53:12 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 74028' 00:24:48.928 09:53:12 -- common/autotest_common.sh@955 -- # kill 74028 00:24:48.928 09:53:12 -- common/autotest_common.sh@960 -- # wait 74028 00:24:50.377 09:53:13 -- target/tls.sh@267 -- # killprocess 73990 00:24:50.377 09:53:13 -- common/autotest_common.sh@936 -- # '[' -z 73990 ']' 00:24:50.377 09:53:13 -- common/autotest_common.sh@940 -- # kill -0 73990 00:24:50.377 09:53:13 -- common/autotest_common.sh@941 -- # uname 00:24:50.377 09:53:13 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:24:50.377 09:53:13 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 73990 00:24:50.377 killing process with pid 73990 00:24:50.377 09:53:13 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:24:50.378 09:53:13 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:24:50.378 09:53:13 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 73990' 00:24:50.378 09:53:13 -- common/autotest_common.sh@955 -- # kill 73990 00:24:50.378 09:53:13 -- common/autotest_common.sh@960 -- # wait 73990 00:24:51.754 09:53:14 -- target/tls.sh@269 -- # nvmfappstart -c /dev/fd/62 00:24:51.754 09:53:14 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:24:51.754 09:53:14 -- target/tls.sh@269 -- # echo '{ 00:24:51.754 "subsystems": [ 00:24:51.754 { 00:24:51.754 "subsystem": "keyring", 00:24:51.754 "config": [ 00:24:51.754 { 00:24:51.754 "method": "keyring_file_add_key", 00:24:51.754 "params": { 00:24:51.754 "name": "key0", 00:24:51.754 "path": "/tmp/tmp.jUoN3zF1yl" 00:24:51.754 } 00:24:51.754 } 00:24:51.754 ] 00:24:51.754 }, 00:24:51.754 { 00:24:51.754 "subsystem": "iobuf", 00:24:51.754 "config": [ 00:24:51.754 { 00:24:51.754 "method": "iobuf_set_options", 00:24:51.754 "params": { 00:24:51.754 "small_pool_count": 8192, 00:24:51.754 "large_pool_count": 1024, 00:24:51.754 "small_bufsize": 8192, 00:24:51.754 "large_bufsize": 135168 00:24:51.754 } 00:24:51.754 } 00:24:51.754 ] 00:24:51.754 }, 00:24:51.754 { 00:24:51.754 "subsystem": "sock", 00:24:51.754 "config": [ 00:24:51.754 { 00:24:51.754 "method": "sock_impl_set_options", 00:24:51.754 "params": { 00:24:51.754 "impl_name": "uring", 
00:24:51.754 "recv_buf_size": 2097152, 00:24:51.754 "send_buf_size": 2097152, 00:24:51.754 "enable_recv_pipe": true, 00:24:51.754 "enable_quickack": false, 00:24:51.754 "enable_placement_id": 0, 00:24:51.754 "enable_zerocopy_send_server": false, 00:24:51.754 "enable_zerocopy_send_client": false, 00:24:51.754 "zerocopy_threshold": 0, 00:24:51.754 "tls_version": 0, 00:24:51.754 "enable_ktls": false 00:24:51.754 } 00:24:51.754 }, 00:24:51.754 { 00:24:51.754 "method": "sock_impl_set_options", 00:24:51.754 "params": { 00:24:51.754 "impl_name": "posix", 00:24:51.754 "recv_buf_size": 2097152, 00:24:51.754 "send_buf_size": 2097152, 00:24:51.754 "enable_recv_pipe": true, 00:24:51.754 "enable_quickack": false, 00:24:51.754 "enable_placement_id": 0, 00:24:51.754 "enable_zerocopy_send_server": true, 00:24:51.754 "enable_zerocopy_send_client": false, 00:24:51.754 "zerocopy_threshold": 0, 00:24:51.754 "tls_version": 0, 00:24:51.754 "enable_ktls": false 00:24:51.754 } 00:24:51.754 }, 00:24:51.754 { 00:24:51.754 "method": "sock_impl_set_options", 00:24:51.754 "params": { 00:24:51.754 "impl_name": "ssl", 00:24:51.754 "recv_buf_size": 4096, 00:24:51.754 "send_buf_size": 4096, 00:24:51.755 "enable_recv_pipe": true, 00:24:51.755 "enable_quickack": false, 00:24:51.755 "enable_placement_id": 0, 00:24:51.755 "enable_zerocopy_send_server": true, 00:24:51.755 "enable_zerocopy_send_client": false, 00:24:51.755 "zerocopy_threshold": 0, 00:24:51.755 "tls_version": 0, 00:24:51.755 "enable_ktls": false 00:24:51.755 } 00:24:51.755 } 00:24:51.755 ] 00:24:51.755 }, 00:24:51.755 { 00:24:51.755 "subsystem": "vmd", 00:24:51.755 "config": [] 00:24:51.755 }, 00:24:51.755 { 00:24:51.755 "subsystem": "accel", 00:24:51.755 "config": [ 00:24:51.755 { 00:24:51.755 "method": "accel_set_options", 00:24:51.755 "params": { 00:24:51.755 "small_cache_size": 128, 00:24:51.755 "large_cache_size": 16, 00:24:51.755 "task_count": 2048, 00:24:51.755 "sequence_count": 2048, 00:24:51.755 "buf_count": 2048 00:24:51.755 } 00:24:51.755 } 00:24:51.755 ] 00:24:51.755 }, 00:24:51.755 { 00:24:51.755 "subsystem": "bdev", 00:24:51.755 "config": [ 00:24:51.755 { 00:24:51.755 "method": "bdev_set_options", 00:24:51.755 "params": { 00:24:51.755 "bdev_io_pool_size": 65535, 00:24:51.755 "bdev_io_cache_size": 256, 00:24:51.755 "bdev_auto_examine": true, 00:24:51.755 "iobuf_small_cache_size": 128, 00:24:51.755 "iobuf_large_cache_size": 16 00:24:51.755 } 00:24:51.755 }, 00:24:51.755 { 00:24:51.755 "method": "bdev_raid_set_options", 00:24:51.755 "params": { 00:24:51.755 "process_window_size_kb": 1024 00:24:51.755 } 00:24:51.755 }, 00:24:51.755 { 00:24:51.755 "method": "bdev_iscsi_set_options", 00:24:51.755 "params": { 00:24:51.755 "timeout_sec": 30 00:24:51.755 } 00:24:51.755 }, 00:24:51.755 { 00:24:51.755 "method": "bdev_nvme_set_options", 00:24:51.755 "params": { 00:24:51.755 "action_on_timeout": "none", 00:24:51.755 "timeout_us": 0, 00:24:51.755 "timeout_admin_us": 0, 00:24:51.755 "keep_alive_timeout_ms": 10000, 00:24:51.755 "arbitration_burst": 0, 00:24:51.755 "low_priority_weight": 0, 00:24:51.755 "medium_priority_weight": 0, 00:24:51.755 "high_priority_weight": 0, 00:24:51.755 "nvme_adminq_poll_period_us": 10000, 00:24:51.755 "nvme_ioq_poll_period_us": 0, 00:24:51.755 "io_queue_requests": 0, 00:24:51.755 "delay_cmd_submit": true, 00:24:51.755 "transport_retry_count": 4, 00:24:51.755 "bdev_retry_count": 3, 00:24:51.755 "transport_ack_timeout": 0, 00:24:51.755 "ctrlr_loss_timeout_sec": 0, 00:24:51.755 "reconnect_delay_sec": 0, 00:24:51.755 
"fast_io_fail_timeout_sec": 0, 00:24:51.755 "disable_auto_failback": false, 00:24:51.755 "generate_uuids": false, 00:24:51.755 "transport_tos": 0, 00:24:51.755 "nvme_error_stat": false, 00:24:51.755 "rdma_srq_size": 0, 00:24:51.755 "io_path_stat": false, 00:24:51.755 "allow_accel_sequence": false, 00:24:51.755 "rdma_max_cq_size": 0, 00:24:51.755 "rdma_cm_event_timeout_ms": 0, 00:24:51.755 "dhchap_digests": [ 00:24:51.755 "sha256", 00:24:51.755 "sha384", 00:24:51.755 "sha512" 00:24:51.755 ], 00:24:51.755 "dhchap_dhgroups": [ 00:24:51.755 "null", 00:24:51.755 "ffdhe2048", 00:24:51.755 "ffdhe3072", 00:24:51.755 "ffdhe4096", 00:24:51.755 "ffdhe6144", 00:24:51.755 "ffdhe8192" 00:24:51.755 ] 00:24:51.755 } 00:24:51.755 }, 00:24:51.755 { 00:24:51.755 "method": "bdev_nvme_set_hotplug", 00:24:51.755 "params": { 00:24:51.755 "period_us": 100000, 00:24:51.755 "enable": false 00:24:51.755 } 00:24:51.755 }, 00:24:51.755 { 00:24:51.755 "method": "bdev_malloc_create", 00:24:51.755 "params": { 00:24:51.755 "name": "malloc0", 00:24:51.755 "num_blocks": 8192, 00:24:51.755 "block_size": 4096, 00:24:51.755 "physical_block_size": 4096, 00:24:51.755 "uuid": "f708913a-6acc-4ddf-b1b9-f1db7360e53d", 00:24:51.755 "optimal_io_boundary": 0 00:24:51.755 } 00:24:51.755 }, 00:24:51.755 { 00:24:51.755 "method": "bdev_wait_for_examine" 00:24:51.755 } 00:24:51.755 ] 00:24:51.755 }, 00:24:51.755 { 00:24:51.755 "subsystem": "nbd", 00:24:51.755 "config": [] 00:24:51.755 }, 00:24:51.755 { 00:24:51.755 "subsystem": "scheduler", 00:24:51.755 "config": [ 00:24:51.755 { 00:24:51.755 "method": "framework_set_scheduler", 00:24:51.755 "params": { 00:24:51.755 "name": "static" 00:24:51.755 } 00:24:51.755 } 00:24:51.755 ] 00:24:51.755 }, 00:24:51.755 { 00:24:51.755 "subsystem": "nvmf", 00:24:51.755 "config": [ 00:24:51.755 { 00:24:51.755 "method": "nvmf_set_config", 00:24:51.755 "params": { 00:24:51.755 "discovery_filter": "match_any", 00:24:51.755 "admin_cmd_passthru": { 00:24:51.755 "identify_ctrlr": false 00:24:51.755 } 00:24:51.755 } 00:24:51.755 }, 00:24:51.755 { 00:24:51.755 "method": "nvmf_set_max_subsystems", 00:24:51.755 "params": { 00:24:51.755 "max_subsystems": 1024 00:24:51.755 } 00:24:51.755 }, 00:24:51.755 { 00:24:51.755 "method": "nvmf_set_crdt", 00:24:51.755 "params": { 00:24:51.755 "crdt1": 0, 00:24:51.755 "crdt2": 0, 00:24:51.755 "crdt3": 0 00:24:51.755 } 00:24:51.755 }, 00:24:51.755 { 00:24:51.755 "method": "nvmf_create_transport", 00:24:51.755 "params": { 00:24:51.755 "trtype": "TCP", 00:24:51.755 "max_queue_depth": 128, 00:24:51.755 "max_io_qpairs_per_ctrlr": 127, 00:24:51.755 "in_capsule_data_size": 4096, 00:24:51.755 "max_io_size": 131072, 00:24:51.755 "io_unit_size": 131072, 00:24:51.755 "max_aq_depth": 128, 00:24:51.755 "num_shared_buffers": 511, 00:24:51.755 "buf_cache_size": 4294967295, 00:24:51.755 "dif_insert_or_strip": false, 00:24:51.755 "zcopy": false, 00:24:51.755 "c2h_success": false, 00:24:51.755 "sock_priority": 0, 00:24:51.755 "abort_timeout_sec": 1, 00:24:51.755 "ack_timeout": 0 00:24:51.755 } 00:24:51.755 }, 00:24:51.755 { 00:24:51.755 "method": "nvmf_create_subsystem", 00:24:51.755 "params": { 00:24:51.755 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:24:51.755 "allow_any_host": false, 00:24:51.755 "serial_number": "00000000000000000000", 00:24:51.755 "model_number": "SPDK bdev Controller", 00:24:51.755 "max_namespaces": 32, 00:24:51.755 "min_cntlid": 1, 00:24:51.755 "max_cntlid": 65519, 00:24:51.755 "ana_reporting": false 00:24:51.755 } 00:24:51.755 }, 00:24:51.755 { 00:24:51.755 "method": 
"nvmf_subsystem_add_host", 00:24:51.755 "params": { 00:24:51.755 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:24:51.755 "host": "nqn.2016-06.io.spdk:host1", 00:24:51.755 "psk": "key0" 00:24:51.755 } 00:24:51.755 }, 00:24:51.755 { 00:24:51.755 "method": "nvmf_subsystem_add_ns", 00:24:51.755 "params": { 00:24:51.755 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:24:51.755 "namespace": { 00:24:51.755 "nsid": 1, 00:24:51.755 "bdev_name": "malloc0", 00:24:51.755 "nguid": "F708913A6ACC4DDFB1B9F1DB7360E53D", 00:24:51.755 "uuid": "f708913a-6acc-4ddf-b1b9-f1db7360e53d", 00:24:51.755 "no_auto_visible": false 00:24:51.755 } 00:24:51.755 } 00:24:51.755 }, 00:24:51.755 { 00:24:51.755 "method": "nvmf_subsystem_add_listener", 00:24:51.755 "params": { 00:24:51.755 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:24:51.755 "listen_address": { 00:24:51.755 "trtype": "TCP", 00:24:51.756 "adrfam": "IPv4", 00:24:51.756 "traddr": "10.0.0.2", 00:24:51.756 "trsvcid": "4420" 00:24:51.756 }, 00:24:51.756 "secure_channel": true 00:24:51.756 } 00:24:51.756 } 00:24:51.756 ] 00:24:51.756 } 00:24:51.756 ] 00:24:51.756 }' 00:24:51.756 09:53:14 -- common/autotest_common.sh@710 -- # xtrace_disable 00:24:51.756 09:53:14 -- common/autotest_common.sh@10 -- # set +x 00:24:51.756 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:51.756 09:53:14 -- nvmf/common.sh@470 -- # nvmfpid=74107 00:24:51.756 09:53:14 -- nvmf/common.sh@469 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -c /dev/fd/62 00:24:51.756 09:53:14 -- nvmf/common.sh@471 -- # waitforlisten 74107 00:24:51.756 09:53:14 -- common/autotest_common.sh@817 -- # '[' -z 74107 ']' 00:24:51.756 09:53:14 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:51.756 09:53:14 -- common/autotest_common.sh@822 -- # local max_retries=100 00:24:51.756 09:53:14 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:51.756 09:53:14 -- common/autotest_common.sh@826 -- # xtrace_disable 00:24:51.756 09:53:14 -- common/autotest_common.sh@10 -- # set +x 00:24:51.756 [2024-04-18 09:53:15.003308] Starting SPDK v24.05-pre git sha1 65b4e17c6 / DPDK 23.11.0 initialization... 00:24:51.756 [2024-04-18 09:53:15.003733] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:52.015 [2024-04-18 09:53:15.179849] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:52.274 [2024-04-18 09:53:15.429482] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:52.274 [2024-04-18 09:53:15.429703] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:52.274 [2024-04-18 09:53:15.429871] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:52.274 [2024-04-18 09:53:15.430031] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:52.274 [2024-04-18 09:53:15.430170] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:24:52.274 [2024-04-18 09:53:15.430341] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:24:52.842 [2024-04-18 09:53:15.930686] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:52.842 [2024-04-18 09:53:15.962655] tcp.c: 925:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:24:52.842 [2024-04-18 09:53:15.970322] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:52.842 09:53:16 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:24:52.842 09:53:16 -- common/autotest_common.sh@850 -- # return 0 00:24:52.842 09:53:16 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:24:52.842 09:53:16 -- common/autotest_common.sh@716 -- # xtrace_disable 00:24:52.842 09:53:16 -- common/autotest_common.sh@10 -- # set +x 00:24:52.842 09:53:16 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:52.842 09:53:16 -- target/tls.sh@272 -- # bdevperf_pid=74139 00:24:52.842 09:53:16 -- target/tls.sh@273 -- # waitforlisten 74139 /var/tmp/bdevperf.sock 00:24:52.842 09:53:16 -- common/autotest_common.sh@817 -- # '[' -z 74139 ']' 00:24:52.842 09:53:16 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:24:52.842 09:53:16 -- common/autotest_common.sh@822 -- # local max_retries=100 00:24:52.842 09:53:16 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:24:52.842 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:24:52.842 09:53:16 -- common/autotest_common.sh@826 -- # xtrace_disable 00:24:52.842 09:53:16 -- common/autotest_common.sh@10 -- # set +x 00:24:52.842 09:53:16 -- target/tls.sh@270 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 -c /dev/fd/63 00:24:52.842 09:53:16 -- target/tls.sh@270 -- # echo '{ 00:24:52.842 "subsystems": [ 00:24:52.842 { 00:24:52.842 "subsystem": "keyring", 00:24:52.842 "config": [ 00:24:52.842 { 00:24:52.842 "method": "keyring_file_add_key", 00:24:52.842 "params": { 00:24:52.842 "name": "key0", 00:24:52.842 "path": "/tmp/tmp.jUoN3zF1yl" 00:24:52.842 } 00:24:52.842 } 00:24:52.842 ] 00:24:52.842 }, 00:24:52.842 { 00:24:52.842 "subsystem": "iobuf", 00:24:52.842 "config": [ 00:24:52.842 { 00:24:52.842 "method": "iobuf_set_options", 00:24:52.842 "params": { 00:24:52.842 "small_pool_count": 8192, 00:24:52.842 "large_pool_count": 1024, 00:24:52.842 "small_bufsize": 8192, 00:24:52.842 "large_bufsize": 135168 00:24:52.842 } 00:24:52.842 } 00:24:52.842 ] 00:24:52.842 }, 00:24:52.842 { 00:24:52.842 "subsystem": "sock", 00:24:52.842 "config": [ 00:24:52.842 { 00:24:52.842 "method": "sock_impl_set_options", 00:24:52.842 "params": { 00:24:52.842 "impl_name": "uring", 00:24:52.842 "recv_buf_size": 2097152, 00:24:52.842 "send_buf_size": 2097152, 00:24:52.842 "enable_recv_pipe": true, 00:24:52.842 "enable_quickack": false, 00:24:52.842 "enable_placement_id": 0, 00:24:52.842 "enable_zerocopy_send_server": false, 00:24:52.842 "enable_zerocopy_send_client": false, 00:24:52.843 "zerocopy_threshold": 0, 00:24:52.843 "tls_version": 0, 00:24:52.843 "enable_ktls": false 00:24:52.843 } 00:24:52.843 }, 00:24:52.843 { 00:24:52.843 "method": "sock_impl_set_options", 00:24:52.843 "params": { 00:24:52.843 "impl_name": "posix", 00:24:52.843 "recv_buf_size": 2097152, 00:24:52.843 "send_buf_size": 2097152, 00:24:52.843 
"enable_recv_pipe": true, 00:24:52.843 "enable_quickack": false, 00:24:52.843 "enable_placement_id": 0, 00:24:52.843 "enable_zerocopy_send_server": true, 00:24:52.843 "enable_zerocopy_send_client": false, 00:24:52.843 "zerocopy_threshold": 0, 00:24:52.843 "tls_version": 0, 00:24:52.843 "enable_ktls": false 00:24:52.843 } 00:24:52.843 }, 00:24:52.843 { 00:24:52.843 "method": "sock_impl_set_options", 00:24:52.843 "params": { 00:24:52.843 "impl_name": "ssl", 00:24:52.843 "recv_buf_size": 4096, 00:24:52.843 "send_buf_size": 4096, 00:24:52.843 "enable_recv_pipe": true, 00:24:52.843 "enable_quickack": false, 00:24:52.843 "enable_placement_id": 0, 00:24:52.843 "enable_zerocopy_send_server": true, 00:24:52.843 "enable_zerocopy_send_client": false, 00:24:52.843 "zerocopy_threshold": 0, 00:24:52.843 "tls_version": 0, 00:24:52.843 "enable_ktls": false 00:24:52.843 } 00:24:52.843 } 00:24:52.843 ] 00:24:52.843 }, 00:24:52.843 { 00:24:52.843 "subsystem": "vmd", 00:24:52.843 "config": [] 00:24:52.843 }, 00:24:52.843 { 00:24:52.843 "subsystem": "accel", 00:24:52.843 "config": [ 00:24:52.843 { 00:24:52.843 "method": "accel_set_options", 00:24:52.843 "params": { 00:24:52.843 "small_cache_size": 128, 00:24:52.843 "large_cache_size": 16, 00:24:52.843 "task_count": 2048, 00:24:52.843 "sequence_count": 2048, 00:24:52.843 "buf_count": 2048 00:24:52.843 } 00:24:52.843 } 00:24:52.843 ] 00:24:52.843 }, 00:24:52.843 { 00:24:52.843 "subsystem": "bdev", 00:24:52.843 "config": [ 00:24:52.843 { 00:24:52.843 "method": "bdev_set_options", 00:24:52.843 "params": { 00:24:52.843 "bdev_io_pool_size": 65535, 00:24:52.843 "bdev_io_cache_size": 256, 00:24:52.843 "bdev_auto_examine": true, 00:24:52.843 "iobuf_small_cache_size": 128, 00:24:52.843 "iobuf_large_cache_size": 16 00:24:52.843 } 00:24:52.843 }, 00:24:52.843 { 00:24:52.843 "method": "bdev_raid_set_options", 00:24:52.843 "params": { 00:24:52.843 "process_window_size_kb": 1024 00:24:52.843 } 00:24:52.843 }, 00:24:52.843 { 00:24:52.843 "method": "bdev_iscsi_set_options", 00:24:52.843 "params": { 00:24:52.843 "timeout_sec": 30 00:24:52.843 } 00:24:52.843 }, 00:24:52.843 { 00:24:52.843 "method": "bdev_nvme_set_options", 00:24:52.843 "params": { 00:24:52.843 "action_on_timeout": "none", 00:24:52.843 "timeout_us": 0, 00:24:52.843 "timeout_admin_us": 0, 00:24:52.843 "keep_alive_timeout_ms": 10000, 00:24:52.843 "arbitration_burst": 0, 00:24:52.843 "low_priority_weight": 0, 00:24:52.843 "medium_priority_weight": 0, 00:24:52.843 "high_priority_weight": 0, 00:24:52.843 "nvme_adminq_poll_period_us": 10000, 00:24:52.843 "nvme_ioq_poll_period_us": 0, 00:24:52.843 "io_queue_requests": 512, 00:24:52.843 "delay_cmd_submit": true, 00:24:52.843 "transport_retry_count": 4, 00:24:52.843 "bdev_retry_count": 3, 00:24:52.843 "transport_ack_timeout": 0, 00:24:52.843 "ctrlr_loss_timeout_sec": 0, 00:24:52.843 "reconnect_delay_sec": 0, 00:24:52.843 "fast_io_fail_timeout_sec": 0, 00:24:52.843 "disable_auto_failback": false, 00:24:52.843 "generate_uuids": false, 00:24:52.843 "transport_tos": 0, 00:24:52.843 "nvme_error_stat": false, 00:24:52.843 "rdma_srq_size": 0, 00:24:52.843 "io_path_stat": false, 00:24:52.843 "allow_accel_sequence": false, 00:24:52.843 "rdma_max_cq_size": 0, 00:24:52.843 "rdma_cm_event_timeout_ms": 0, 00:24:52.843 "dhchap_digests": [ 00:24:52.843 "sha256", 00:24:52.843 "sha384", 00:24:52.843 "sha512" 00:24:52.843 ], 00:24:52.843 "dhchap_dhgroups": [ 00:24:52.843 "null", 00:24:52.843 "ffdhe2048", 00:24:52.843 "ffdhe3072", 00:24:52.843 "ffdhe4096", 00:24:52.843 "ffdhe6144", 
00:24:52.843 "ffdhe8192" 00:24:52.843 ] 00:24:52.843 } 00:24:52.843 }, 00:24:52.843 { 00:24:52.843 "method": "bdev_nvme_attach_controller", 00:24:52.843 "params": { 00:24:52.843 "name": "nvme0", 00:24:52.843 "trtype": "TCP", 00:24:52.843 "adrfam": "IPv4", 00:24:52.843 "traddr": "10.0.0.2", 00:24:52.843 "trsvcid": "4420", 00:24:52.843 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:24:52.843 "prchk_reftag": false, 00:24:52.843 "prchk_guard": false, 00:24:52.843 "ctrlr_loss_timeout_sec": 0, 00:24:52.843 "reconnect_delay_sec": 0, 00:24:52.843 "fast_io_fail_timeout_sec": 0, 00:24:52.843 "psk": "key0", 00:24:52.843 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:24:52.843 "hdgst": false, 00:24:52.843 "ddgst": false 00:24:52.843 } 00:24:52.843 }, 00:24:52.843 { 00:24:52.843 "method": "bdev_nvme_set_hotplug", 00:24:52.843 "params": { 00:24:52.843 "period_us": 100000, 00:24:52.843 "enable": false 00:24:52.843 } 00:24:52.843 }, 00:24:52.843 { 00:24:52.843 "method": "bdev_enable_histogram", 00:24:52.843 "params": { 00:24:52.843 "name": "nvme0n1", 00:24:52.843 "enable": true 00:24:52.843 } 00:24:52.843 }, 00:24:52.843 { 00:24:52.843 "method": "bdev_wait_for_examine" 00:24:52.843 } 00:24:52.843 ] 00:24:52.843 }, 00:24:52.843 { 00:24:52.843 "subsystem": "nbd", 00:24:52.843 "config": [] 00:24:52.843 } 00:24:52.843 ] 00:24:52.843 }' 00:24:52.843 [2024-04-18 09:53:16.149831] Starting SPDK v24.05-pre git sha1 65b4e17c6 / DPDK 23.11.0 initialization... 00:24:52.843 [2024-04-18 09:53:16.150215] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74139 ] 00:24:53.103 [2024-04-18 09:53:16.327030] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:53.363 [2024-04-18 09:53:16.609761] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:24:53.622 [2024-04-18 09:53:17.012816] bdev_nvme_rpc.c: 515:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:24:53.880 09:53:17 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:24:53.880 09:53:17 -- common/autotest_common.sh@850 -- # return 0 00:24:53.880 09:53:17 -- target/tls.sh@275 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:24:53.880 09:53:17 -- target/tls.sh@275 -- # jq -r '.[].name' 00:24:54.139 09:53:17 -- target/tls.sh@275 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:54.139 09:53:17 -- target/tls.sh@276 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:24:54.398 Running I/O for 1 seconds... 
00:24:55.338 00:24:55.338 Latency(us) 00:24:55.338 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:55.338 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:24:55.338 Verification LBA range: start 0x0 length 0x2000 00:24:55.338 nvme0n1 : 1.07 2500.98 9.77 0.00 0.00 50051.24 13047.62 46232.67 00:24:55.338 =================================================================================================================== 00:24:55.338 Total : 2500.98 9.77 0.00 0.00 50051.24 13047.62 46232.67 00:24:55.338 0 00:24:55.338 09:53:18 -- target/tls.sh@278 -- # trap - SIGINT SIGTERM EXIT 00:24:55.338 09:53:18 -- target/tls.sh@279 -- # cleanup 00:24:55.338 09:53:18 -- target/tls.sh@15 -- # process_shm --id 0 00:24:55.338 09:53:18 -- common/autotest_common.sh@794 -- # type=--id 00:24:55.338 09:53:18 -- common/autotest_common.sh@795 -- # id=0 00:24:55.338 09:53:18 -- common/autotest_common.sh@796 -- # '[' --id = --pid ']' 00:24:55.338 09:53:18 -- common/autotest_common.sh@800 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:24:55.338 09:53:18 -- common/autotest_common.sh@800 -- # shm_files=nvmf_trace.0 00:24:55.338 09:53:18 -- common/autotest_common.sh@802 -- # [[ -z nvmf_trace.0 ]] 00:24:55.338 09:53:18 -- common/autotest_common.sh@806 -- # for n in $shm_files 00:24:55.338 09:53:18 -- common/autotest_common.sh@807 -- # tar -C /dev/shm/ -cvzf /home/vagrant/spdk_repo/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:24:55.338 nvmf_trace.0 00:24:55.338 09:53:18 -- common/autotest_common.sh@809 -- # return 0 00:24:55.338 09:53:18 -- target/tls.sh@16 -- # killprocess 74139 00:24:55.338 09:53:18 -- common/autotest_common.sh@936 -- # '[' -z 74139 ']' 00:24:55.338 09:53:18 -- common/autotest_common.sh@940 -- # kill -0 74139 00:24:55.338 09:53:18 -- common/autotest_common.sh@941 -- # uname 00:24:55.338 09:53:18 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:24:55.338 09:53:18 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 74139 00:24:55.598 killing process with pid 74139 00:24:55.598 Received shutdown signal, test time was about 1.000000 seconds 00:24:55.598 00:24:55.598 Latency(us) 00:24:55.598 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:55.598 =================================================================================================================== 00:24:55.598 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:24:55.598 09:53:18 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:24:55.598 09:53:18 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:24:55.598 09:53:18 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 74139' 00:24:55.598 09:53:18 -- common/autotest_common.sh@955 -- # kill 74139 00:24:55.598 09:53:18 -- common/autotest_common.sh@960 -- # wait 74139 00:24:56.536 09:53:19 -- target/tls.sh@17 -- # nvmftestfini 00:24:56.536 09:53:19 -- nvmf/common.sh@477 -- # nvmfcleanup 00:24:56.536 09:53:19 -- nvmf/common.sh@117 -- # sync 00:24:56.795 09:53:19 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:24:56.795 09:53:19 -- nvmf/common.sh@120 -- # set +e 00:24:56.795 09:53:19 -- nvmf/common.sh@121 -- # for i in {1..20} 00:24:56.795 09:53:19 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:24:56.795 rmmod nvme_tcp 00:24:56.795 rmmod nvme_fabrics 00:24:56.795 rmmod nvme_keyring 00:24:56.795 09:53:20 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:24:56.795 09:53:20 -- nvmf/common.sh@124 -- # set -e 00:24:56.795 09:53:20 -- 
nvmf/common.sh@125 -- # return 0 00:24:56.795 09:53:20 -- nvmf/common.sh@478 -- # '[' -n 74107 ']' 00:24:56.795 09:53:20 -- nvmf/common.sh@479 -- # killprocess 74107 00:24:56.795 09:53:20 -- common/autotest_common.sh@936 -- # '[' -z 74107 ']' 00:24:56.795 09:53:20 -- common/autotest_common.sh@940 -- # kill -0 74107 00:24:56.795 09:53:20 -- common/autotest_common.sh@941 -- # uname 00:24:56.795 09:53:20 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:24:56.795 09:53:20 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 74107 00:24:56.795 killing process with pid 74107 00:24:56.795 09:53:20 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:24:56.795 09:53:20 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:24:56.795 09:53:20 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 74107' 00:24:56.795 09:53:20 -- common/autotest_common.sh@955 -- # kill 74107 00:24:56.795 09:53:20 -- common/autotest_common.sh@960 -- # wait 74107 00:24:58.172 09:53:21 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:24:58.172 09:53:21 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:24:58.172 09:53:21 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:24:58.172 09:53:21 -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:24:58.172 09:53:21 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:24:58.172 09:53:21 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:58.172 09:53:21 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:24:58.172 09:53:21 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:58.172 09:53:21 -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:24:58.172 09:53:21 -- target/tls.sh@18 -- # rm -f /tmp/tmp.IkxghsqSJW /tmp/tmp.7b4f0eEEsC /tmp/tmp.jUoN3zF1yl 00:24:58.172 ************************************ 00:24:58.172 END TEST nvmf_tls 00:24:58.172 ************************************ 00:24:58.172 00:24:58.172 real 1m47.871s 00:24:58.172 user 2m52.149s 00:24:58.172 sys 0m28.111s 00:24:58.172 09:53:21 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:24:58.172 09:53:21 -- common/autotest_common.sh@10 -- # set +x 00:24:58.172 09:53:21 -- nvmf/nvmf.sh@61 -- # run_test nvmf_fips /home/vagrant/spdk_repo/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:24:58.172 09:53:21 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:24:58.172 09:53:21 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:24:58.172 09:53:21 -- common/autotest_common.sh@10 -- # set +x 00:24:58.172 ************************************ 00:24:58.172 START TEST nvmf_fips 00:24:58.172 ************************************ 00:24:58.172 09:53:21 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:24:58.172 * Looking for test storage... 
00:24:58.172 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/fips 00:24:58.172 09:53:21 -- fips/fips.sh@11 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:24:58.172 09:53:21 -- nvmf/common.sh@7 -- # uname -s 00:24:58.172 09:53:21 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:58.172 09:53:21 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:58.172 09:53:21 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:58.172 09:53:21 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:58.172 09:53:21 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:58.172 09:53:21 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:58.172 09:53:21 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:58.173 09:53:21 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:58.173 09:53:21 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:58.173 09:53:21 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:58.432 09:53:21 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:e0c1fd61-2902-4c83-8893-eeb512aeb67d 00:24:58.432 09:53:21 -- nvmf/common.sh@18 -- # NVME_HOSTID=e0c1fd61-2902-4c83-8893-eeb512aeb67d 00:24:58.433 09:53:21 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:58.433 09:53:21 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:58.433 09:53:21 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:24:58.433 09:53:21 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:58.433 09:53:21 -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:24:58.433 09:53:21 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:58.433 09:53:21 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:58.433 09:53:21 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:58.433 09:53:21 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:58.433 09:53:21 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:58.433 09:53:21 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:58.433 09:53:21 -- paths/export.sh@5 -- # export PATH 00:24:58.433 09:53:21 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:58.433 09:53:21 -- nvmf/common.sh@47 -- # : 0 00:24:58.433 09:53:21 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:24:58.433 09:53:21 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:24:58.433 09:53:21 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:58.433 09:53:21 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:58.433 09:53:21 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:58.433 09:53:21 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:24:58.433 09:53:21 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:24:58.433 09:53:21 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:24:58.433 09:53:21 -- fips/fips.sh@12 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:24:58.433 09:53:21 -- fips/fips.sh@89 -- # check_openssl_version 00:24:58.433 09:53:21 -- fips/fips.sh@83 -- # local target=3.0.0 00:24:58.433 09:53:21 -- fips/fips.sh@85 -- # openssl version 00:24:58.433 09:53:21 -- fips/fips.sh@85 -- # awk '{print $2}' 00:24:58.433 09:53:21 -- fips/fips.sh@85 -- # ge 3.0.9 3.0.0 00:24:58.433 09:53:21 -- scripts/common.sh@373 -- # cmp_versions 3.0.9 '>=' 3.0.0 00:24:58.433 09:53:21 -- scripts/common.sh@330 -- # local ver1 ver1_l 00:24:58.433 09:53:21 -- scripts/common.sh@331 -- # local ver2 ver2_l 00:24:58.433 09:53:21 -- scripts/common.sh@333 -- # IFS=.-: 00:24:58.433 09:53:21 -- scripts/common.sh@333 -- # read -ra ver1 00:24:58.433 09:53:21 -- scripts/common.sh@334 -- # IFS=.-: 00:24:58.433 09:53:21 -- scripts/common.sh@334 -- # read -ra ver2 00:24:58.433 09:53:21 -- scripts/common.sh@335 -- # local 'op=>=' 00:24:58.433 09:53:21 -- scripts/common.sh@337 -- # ver1_l=3 00:24:58.433 09:53:21 -- scripts/common.sh@338 -- # ver2_l=3 00:24:58.433 09:53:21 -- scripts/common.sh@340 -- # local lt=0 gt=0 eq=0 v 00:24:58.433 09:53:21 -- scripts/common.sh@341 -- # case "$op" in 00:24:58.433 09:53:21 -- scripts/common.sh@345 -- # : 1 00:24:58.433 09:53:21 -- scripts/common.sh@361 -- # (( v = 0 )) 00:24:58.433 09:53:21 -- scripts/common.sh@361 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:24:58.433 09:53:21 -- scripts/common.sh@362 -- # decimal 3 00:24:58.433 09:53:21 -- scripts/common.sh@350 -- # local d=3 00:24:58.433 09:53:21 -- scripts/common.sh@351 -- # [[ 3 =~ ^[0-9]+$ ]] 00:24:58.433 09:53:21 -- scripts/common.sh@352 -- # echo 3 00:24:58.433 09:53:21 -- scripts/common.sh@362 -- # ver1[v]=3 00:24:58.433 09:53:21 -- scripts/common.sh@363 -- # decimal 3 00:24:58.433 09:53:21 -- scripts/common.sh@350 -- # local d=3 00:24:58.433 09:53:21 -- scripts/common.sh@351 -- # [[ 3 =~ ^[0-9]+$ ]] 00:24:58.433 09:53:21 -- scripts/common.sh@352 -- # echo 3 00:24:58.433 09:53:21 -- scripts/common.sh@363 -- # ver2[v]=3 00:24:58.433 09:53:21 -- scripts/common.sh@364 -- # (( ver1[v] > ver2[v] )) 00:24:58.433 09:53:21 -- scripts/common.sh@365 -- # (( ver1[v] < ver2[v] )) 00:24:58.433 09:53:21 -- scripts/common.sh@361 -- # (( v++ )) 00:24:58.433 09:53:21 -- scripts/common.sh@361 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:24:58.433 09:53:21 -- scripts/common.sh@362 -- # decimal 0 00:24:58.433 09:53:21 -- scripts/common.sh@350 -- # local d=0 00:24:58.433 09:53:21 -- scripts/common.sh@351 -- # [[ 0 =~ ^[0-9]+$ ]] 00:24:58.433 09:53:21 -- scripts/common.sh@352 -- # echo 0 00:24:58.433 09:53:21 -- scripts/common.sh@362 -- # ver1[v]=0 00:24:58.433 09:53:21 -- scripts/common.sh@363 -- # decimal 0 00:24:58.433 09:53:21 -- scripts/common.sh@350 -- # local d=0 00:24:58.433 09:53:21 -- scripts/common.sh@351 -- # [[ 0 =~ ^[0-9]+$ ]] 00:24:58.433 09:53:21 -- scripts/common.sh@352 -- # echo 0 00:24:58.433 09:53:21 -- scripts/common.sh@363 -- # ver2[v]=0 00:24:58.433 09:53:21 -- scripts/common.sh@364 -- # (( ver1[v] > ver2[v] )) 00:24:58.433 09:53:21 -- scripts/common.sh@365 -- # (( ver1[v] < ver2[v] )) 00:24:58.433 09:53:21 -- scripts/common.sh@361 -- # (( v++ )) 00:24:58.433 09:53:21 -- scripts/common.sh@361 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:24:58.433 09:53:21 -- scripts/common.sh@362 -- # decimal 9 00:24:58.433 09:53:21 -- scripts/common.sh@350 -- # local d=9 00:24:58.433 09:53:21 -- scripts/common.sh@351 -- # [[ 9 =~ ^[0-9]+$ ]] 00:24:58.433 09:53:21 -- scripts/common.sh@352 -- # echo 9 00:24:58.433 09:53:21 -- scripts/common.sh@362 -- # ver1[v]=9 00:24:58.433 09:53:21 -- scripts/common.sh@363 -- # decimal 0 00:24:58.433 09:53:21 -- scripts/common.sh@350 -- # local d=0 00:24:58.433 09:53:21 -- scripts/common.sh@351 -- # [[ 0 =~ ^[0-9]+$ ]] 00:24:58.433 09:53:21 -- scripts/common.sh@352 -- # echo 0 00:24:58.433 09:53:21 -- scripts/common.sh@363 -- # ver2[v]=0 00:24:58.433 09:53:21 -- scripts/common.sh@364 -- # (( ver1[v] > ver2[v] )) 00:24:58.433 09:53:21 -- scripts/common.sh@364 -- # return 0 00:24:58.433 09:53:21 -- fips/fips.sh@95 -- # openssl info -modulesdir 00:24:58.433 09:53:21 -- fips/fips.sh@95 -- # [[ ! 
-f /usr/lib64/ossl-modules/fips.so ]] 00:24:58.433 09:53:21 -- fips/fips.sh@100 -- # openssl fipsinstall -help 00:24:58.433 09:53:21 -- fips/fips.sh@100 -- # warn='This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode' 00:24:58.433 09:53:21 -- fips/fips.sh@101 -- # [[ This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode == \T\h\i\s\ \c\o\m\m\a\n\d\ \i\s\ \n\o\t\ \e\n\a\b\l\e\d* ]] 00:24:58.433 09:53:21 -- fips/fips.sh@104 -- # export callback=build_openssl_config 00:24:58.433 09:53:21 -- fips/fips.sh@104 -- # callback=build_openssl_config 00:24:58.433 09:53:21 -- fips/fips.sh@113 -- # build_openssl_config 00:24:58.433 09:53:21 -- fips/fips.sh@37 -- # cat 00:24:58.433 09:53:21 -- fips/fips.sh@57 -- # [[ ! -t 0 ]] 00:24:58.433 09:53:21 -- fips/fips.sh@58 -- # cat - 00:24:58.433 09:53:21 -- fips/fips.sh@114 -- # export OPENSSL_CONF=spdk_fips.conf 00:24:58.433 09:53:21 -- fips/fips.sh@114 -- # OPENSSL_CONF=spdk_fips.conf 00:24:58.433 09:53:21 -- fips/fips.sh@116 -- # mapfile -t providers 00:24:58.433 09:53:21 -- fips/fips.sh@116 -- # openssl list -providers 00:24:58.433 09:53:21 -- fips/fips.sh@116 -- # grep name 00:24:58.433 09:53:21 -- fips/fips.sh@120 -- # (( 2 != 2 )) 00:24:58.433 09:53:21 -- fips/fips.sh@120 -- # [[ name: openssl base provider != *base* ]] 00:24:58.433 09:53:21 -- fips/fips.sh@120 -- # [[ name: red hat enterprise linux 9 - openssl fips provider != *fips* ]] 00:24:58.433 09:53:21 -- fips/fips.sh@127 -- # NOT openssl md5 /dev/fd/62 00:24:58.433 09:53:21 -- fips/fips.sh@127 -- # : 00:24:58.433 09:53:21 -- common/autotest_common.sh@638 -- # local es=0 00:24:58.433 09:53:21 -- common/autotest_common.sh@640 -- # valid_exec_arg openssl md5 /dev/fd/62 00:24:58.433 09:53:21 -- common/autotest_common.sh@626 -- # local arg=openssl 00:24:58.433 09:53:21 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:24:58.433 09:53:21 -- common/autotest_common.sh@630 -- # type -t openssl 00:24:58.433 09:53:21 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:24:58.433 09:53:21 -- common/autotest_common.sh@632 -- # type -P openssl 00:24:58.433 09:53:21 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:24:58.433 09:53:21 -- common/autotest_common.sh@632 -- # arg=/usr/bin/openssl 00:24:58.433 09:53:21 -- common/autotest_common.sh@632 -- # [[ -x /usr/bin/openssl ]] 00:24:58.433 09:53:21 -- common/autotest_common.sh@641 -- # openssl md5 /dev/fd/62 00:24:58.433 Error setting digest 00:24:58.434 00F217897B7F0000:error:0308010C:digital envelope routines:inner_evp_generic_fetch:unsupported:crypto/evp/evp_fetch.c:373:Global default library context, Algorithm (MD5 : 97), Properties () 00:24:58.434 00F217897B7F0000:error:03000086:digital envelope routines:evp_md_init_internal:initialization error:crypto/evp/digest.c:254: 00:24:58.434 09:53:21 -- common/autotest_common.sh@641 -- # es=1 00:24:58.434 09:53:21 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:24:58.434 09:53:21 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:24:58.434 09:53:21 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:24:58.434 09:53:21 -- fips/fips.sh@130 -- # nvmftestinit 00:24:58.434 09:53:21 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:24:58.434 09:53:21 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:58.434 09:53:21 -- nvmf/common.sh@437 -- # prepare_net_devs 
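Everything from the openssl version check down to the failed md5 above is the FIPS gate: require OpenSSL >= 3.0.0, require the fips provider module to exist and be listed, then prove enforcement by expecting a weak digest to be rejected. A condensed sketch of those probes:

    openssl version | awk '{print $2}'              # compared against 3.0.0 by cmp_versions
    [[ -f "$(openssl info -modulesdir)/fips.so" ]]  # FIPS provider module is installed
    openssl list -providers | grep name             # expect a base and a fips provider
    if openssl md5 /dev/null; then                  # MD5 must fail when FIPS is enforced
        echo 'FIPS enforcement is not active' >&2
    fi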
00:24:58.434 09:53:21 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:24:58.434 09:53:21 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:24:58.434 09:53:21 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:58.434 09:53:21 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:24:58.434 09:53:21 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:58.434 09:53:21 -- nvmf/common.sh@403 -- # [[ virt != virt ]] 00:24:58.434 09:53:21 -- nvmf/common.sh@405 -- # [[ no == yes ]] 00:24:58.434 09:53:21 -- nvmf/common.sh@412 -- # [[ virt == phy ]] 00:24:58.434 09:53:21 -- nvmf/common.sh@415 -- # [[ virt == phy-fallback ]] 00:24:58.434 09:53:21 -- nvmf/common.sh@420 -- # [[ tcp == tcp ]] 00:24:58.434 09:53:21 -- nvmf/common.sh@421 -- # nvmf_veth_init 00:24:58.434 09:53:21 -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:58.434 09:53:21 -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:58.434 09:53:21 -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:24:58.434 09:53:21 -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:24:58.434 09:53:21 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:24:58.434 09:53:21 -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:24:58.434 09:53:21 -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:24:58.434 09:53:21 -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:58.434 09:53:21 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:24:58.434 09:53:21 -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:24:58.434 09:53:21 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:24:58.434 09:53:21 -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:24:58.434 09:53:21 -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:24:58.434 09:53:21 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:24:58.434 Cannot find device "nvmf_tgt_br" 00:24:58.434 09:53:21 -- nvmf/common.sh@155 -- # true 00:24:58.434 09:53:21 -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:24:58.434 Cannot find device "nvmf_tgt_br2" 00:24:58.434 09:53:21 -- nvmf/common.sh@156 -- # true 00:24:58.434 09:53:21 -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:24:58.434 09:53:21 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:24:58.695 Cannot find device "nvmf_tgt_br" 00:24:58.695 09:53:21 -- nvmf/common.sh@158 -- # true 00:24:58.695 09:53:21 -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:24:58.695 Cannot find device "nvmf_tgt_br2" 00:24:58.695 09:53:21 -- nvmf/common.sh@159 -- # true 00:24:58.695 09:53:21 -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:24:58.695 09:53:21 -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:24:58.695 09:53:21 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:24:58.695 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:24:58.696 09:53:21 -- nvmf/common.sh@162 -- # true 00:24:58.696 09:53:21 -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:24:58.696 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:24:58.696 09:53:21 -- nvmf/common.sh@163 -- # true 00:24:58.696 09:53:21 -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:24:58.696 09:53:21 -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:24:58.696 09:53:21 
-- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:24:58.696 09:53:21 -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:24:58.696 09:53:21 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:24:58.696 09:53:21 -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:24:58.696 09:53:21 -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:24:58.696 09:53:21 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:24:58.696 09:53:21 -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:24:58.696 09:53:21 -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:24:58.696 09:53:21 -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:24:58.696 09:53:21 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:24:58.696 09:53:22 -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:24:58.696 09:53:22 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:24:58.696 09:53:22 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:24:58.696 09:53:22 -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:24:58.696 09:53:22 -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:24:58.696 09:53:22 -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:24:58.696 09:53:22 -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:24:58.696 09:53:22 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:24:58.696 09:53:22 -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:24:58.696 09:53:22 -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:24:58.696 09:53:22 -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:24:58.696 09:53:22 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:24:58.696 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:24:58.696 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.074 ms 00:24:58.696 00:24:58.696 --- 10.0.0.2 ping statistics --- 00:24:58.696 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:58.696 rtt min/avg/max/mdev = 0.074/0.074/0.074/0.000 ms 00:24:58.696 09:53:22 -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:24:58.696 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:24:58.696 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.043 ms 00:24:58.696 00:24:58.696 --- 10.0.0.3 ping statistics --- 00:24:58.696 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:58.696 rtt min/avg/max/mdev = 0.043/0.043/0.043/0.000 ms 00:24:58.696 09:53:22 -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:24:58.954 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:24:58.954 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.035 ms 00:24:58.954 00:24:58.954 --- 10.0.0.1 ping statistics --- 00:24:58.954 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:58.954 rtt min/avg/max/mdev = 0.035/0.035/0.035/0.000 ms 00:24:58.954 09:53:22 -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:58.954 09:53:22 -- nvmf/common.sh@422 -- # return 0 00:24:58.954 09:53:22 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:24:58.954 09:53:22 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:58.955 09:53:22 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:24:58.955 09:53:22 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:24:58.955 09:53:22 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:58.955 09:53:22 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:24:58.955 09:53:22 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:24:58.955 09:53:22 -- fips/fips.sh@131 -- # nvmfappstart -m 0x2 00:24:58.955 09:53:22 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:24:58.955 09:53:22 -- common/autotest_common.sh@710 -- # xtrace_disable 00:24:58.955 09:53:22 -- common/autotest_common.sh@10 -- # set +x 00:24:58.955 09:53:22 -- nvmf/common.sh@470 -- # nvmfpid=74437 00:24:58.955 09:53:22 -- nvmf/common.sh@471 -- # waitforlisten 74437 00:24:58.955 09:53:22 -- nvmf/common.sh@469 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:24:58.955 09:53:22 -- common/autotest_common.sh@817 -- # '[' -z 74437 ']' 00:24:58.955 09:53:22 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:58.955 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:58.955 09:53:22 -- common/autotest_common.sh@822 -- # local max_retries=100 00:24:58.955 09:53:22 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:58.955 09:53:22 -- common/autotest_common.sh@826 -- # xtrace_disable 00:24:58.955 09:53:22 -- common/autotest_common.sh@10 -- # set +x 00:24:58.955 [2024-04-18 09:53:22.274475] Starting SPDK v24.05-pre git sha1 65b4e17c6 / DPDK 23.11.0 initialization... 00:24:58.955 [2024-04-18 09:53:22.274635] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:59.213 [2024-04-18 09:53:22.454248] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:59.471 [2024-04-18 09:53:22.733072] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:59.471 [2024-04-18 09:53:22.733146] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:59.471 [2024-04-18 09:53:22.733171] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:59.471 [2024-04-18 09:53:22.733184] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:59.471 [2024-04-18 09:53:22.733198] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
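The NET_TYPE=virt plumbing traced above never leaves the host: the target lives in the nvmf_tgt_ns_spdk namespace and talks to the initiator over veth pairs tied together by a bridge, which is what the three pings verify. The wiring, condensed (link-up steps and the 10.0.0.3 second target address omitted):

    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br
    ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br
    ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
    ip addr add 10.0.0.1/24 dev nvmf_init_if                                 # initiator side
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if   # target side
    ip link add nvmf_br type bridge
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br  master nvmf_br
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT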
00:24:59.471 [2024-04-18 09:53:22.733240] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:25:00.038 09:53:23 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:25:00.038 09:53:23 -- common/autotest_common.sh@850 -- # return 0 00:25:00.038 09:53:23 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:25:00.038 09:53:23 -- common/autotest_common.sh@716 -- # xtrace_disable 00:25:00.038 09:53:23 -- common/autotest_common.sh@10 -- # set +x 00:25:00.038 09:53:23 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:25:00.038 09:53:23 -- fips/fips.sh@133 -- # trap cleanup EXIT 00:25:00.038 09:53:23 -- fips/fips.sh@136 -- # key=NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:25:00.038 09:53:23 -- fips/fips.sh@137 -- # key_path=/home/vagrant/spdk_repo/spdk/test/nvmf/fips/key.txt 00:25:00.038 09:53:23 -- fips/fips.sh@138 -- # echo -n NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:25:00.039 09:53:23 -- fips/fips.sh@139 -- # chmod 0600 /home/vagrant/spdk_repo/spdk/test/nvmf/fips/key.txt 00:25:00.039 09:53:23 -- fips/fips.sh@141 -- # setup_nvmf_tgt_conf /home/vagrant/spdk_repo/spdk/test/nvmf/fips/key.txt 00:25:00.039 09:53:23 -- fips/fips.sh@22 -- # local key=/home/vagrant/spdk_repo/spdk/test/nvmf/fips/key.txt 00:25:00.039 09:53:23 -- fips/fips.sh@24 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:25:00.297 [2024-04-18 09:53:23.495652] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:00.297 [2024-04-18 09:53:23.511568] tcp.c: 925:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:25:00.297 [2024-04-18 09:53:23.511969] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:00.297 [2024-04-18 09:53:23.569550] tcp.c:3652:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:25:00.297 malloc0 00:25:00.297 09:53:23 -- fips/fips.sh@144 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:25:00.297 09:53:23 -- fips/fips.sh@147 -- # bdevperf_pid=74477 00:25:00.297 09:53:23 -- fips/fips.sh@145 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:25:00.297 09:53:23 -- fips/fips.sh@148 -- # waitforlisten 74477 /var/tmp/bdevperf.sock 00:25:00.297 09:53:23 -- common/autotest_common.sh@817 -- # '[' -z 74477 ']' 00:25:00.297 09:53:23 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:25:00.297 09:53:23 -- common/autotest_common.sh@822 -- # local max_retries=100 00:25:00.297 09:53:23 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:25:00.297 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:25:00.297 09:53:23 -- common/autotest_common.sh@826 -- # xtrace_disable 00:25:00.297 09:53:23 -- common/autotest_common.sh@10 -- # set +x 00:25:00.555 [2024-04-18 09:53:23.772365] Starting SPDK v24.05-pre git sha1 65b4e17c6 / DPDK 23.11.0 initialization... 
00:25:00.555 [2024-04-18 09:53:23.772752] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74477 ] 00:25:00.555 [2024-04-18 09:53:23.945491] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:01.124 [2024-04-18 09:53:24.238484] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:25:01.383 09:53:24 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:25:01.383 09:53:24 -- common/autotest_common.sh@850 -- # return 0 00:25:01.383 09:53:24 -- fips/fips.sh@150 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /home/vagrant/spdk_repo/spdk/test/nvmf/fips/key.txt 00:25:01.641 [2024-04-18 09:53:24.947464] bdev_nvme_rpc.c: 515:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:25:01.641 [2024-04-18 09:53:24.947673] nvme_tcp.c:2577:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:25:01.906 TLSTESTn1 00:25:01.906 09:53:25 -- fips/fips.sh@154 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:25:01.906 Running I/O for 10 seconds... 00:25:11.888 00:25:11.888 Latency(us) 00:25:11.888 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:11.888 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:25:11.888 Verification LBA range: start 0x0 length 0x2000 00:25:11.888 TLSTESTn1 : 10.03 2690.48 10.51 0.00 0.00 47472.57 9115.46 47662.55 00:25:11.888 =================================================================================================================== 00:25:11.888 Total : 2690.48 10.51 0.00 0.00 47472.57 9115.46 47662.55 00:25:11.888 0 00:25:11.888 09:53:35 -- fips/fips.sh@1 -- # cleanup 00:25:11.888 09:53:35 -- fips/fips.sh@15 -- # process_shm --id 0 00:25:11.888 09:53:35 -- common/autotest_common.sh@794 -- # type=--id 00:25:11.888 09:53:35 -- common/autotest_common.sh@795 -- # id=0 00:25:11.888 09:53:35 -- common/autotest_common.sh@796 -- # '[' --id = --pid ']' 00:25:11.888 09:53:35 -- common/autotest_common.sh@800 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:25:11.888 09:53:35 -- common/autotest_common.sh@800 -- # shm_files=nvmf_trace.0 00:25:11.888 09:53:35 -- common/autotest_common.sh@802 -- # [[ -z nvmf_trace.0 ]] 00:25:11.888 09:53:35 -- common/autotest_common.sh@806 -- # for n in $shm_files 00:25:11.888 09:53:35 -- common/autotest_common.sh@807 -- # tar -C /dev/shm/ -cvzf /home/vagrant/spdk_repo/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:25:11.888 nvmf_trace.0 00:25:12.147 09:53:35 -- common/autotest_common.sh@809 -- # return 0 00:25:12.147 09:53:35 -- fips/fips.sh@16 -- # killprocess 74477 00:25:12.147 09:53:35 -- common/autotest_common.sh@936 -- # '[' -z 74477 ']' 00:25:12.147 09:53:35 -- common/autotest_common.sh@940 -- # kill -0 74477 00:25:12.147 09:53:35 -- common/autotest_common.sh@941 -- # uname 00:25:12.147 09:53:35 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:25:12.147 09:53:35 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 74477 00:25:12.147 killing process with pid 74477 00:25:12.147 Received shutdown signal, test time was 
about 10.000000 seconds 00:25:12.147 00:25:12.147 Latency(us) 00:25:12.147 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:12.147 =================================================================================================================== 00:25:12.147 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:25:12.147 09:53:35 -- common/autotest_common.sh@942 -- # process_name=reactor_2 00:25:12.147 09:53:35 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']' 00:25:12.147 09:53:35 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 74477' 00:25:12.147 09:53:35 -- common/autotest_common.sh@955 -- # kill 74477 00:25:12.147 [2024-04-18 09:53:35.316090] app.c: 937:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:25:12.147 09:53:35 -- common/autotest_common.sh@960 -- # wait 74477 00:25:13.522 09:53:36 -- fips/fips.sh@17 -- # nvmftestfini 00:25:13.522 09:53:36 -- nvmf/common.sh@477 -- # nvmfcleanup 00:25:13.522 09:53:36 -- nvmf/common.sh@117 -- # sync 00:25:13.522 09:53:36 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:25:13.522 09:53:36 -- nvmf/common.sh@120 -- # set +e 00:25:13.522 09:53:36 -- nvmf/common.sh@121 -- # for i in {1..20} 00:25:13.522 09:53:36 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:25:13.522 rmmod nvme_tcp 00:25:13.522 rmmod nvme_fabrics 00:25:13.522 rmmod nvme_keyring 00:25:13.522 09:53:36 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:25:13.522 09:53:36 -- nvmf/common.sh@124 -- # set -e 00:25:13.522 09:53:36 -- nvmf/common.sh@125 -- # return 0 00:25:13.522 09:53:36 -- nvmf/common.sh@478 -- # '[' -n 74437 ']' 00:25:13.522 09:53:36 -- nvmf/common.sh@479 -- # killprocess 74437 00:25:13.522 09:53:36 -- common/autotest_common.sh@936 -- # '[' -z 74437 ']' 00:25:13.522 09:53:36 -- common/autotest_common.sh@940 -- # kill -0 74437 00:25:13.522 09:53:36 -- common/autotest_common.sh@941 -- # uname 00:25:13.522 09:53:36 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:25:13.522 09:53:36 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 74437 00:25:13.522 killing process with pid 74437 00:25:13.522 09:53:36 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:25:13.522 09:53:36 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:25:13.522 09:53:36 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 74437' 00:25:13.522 09:53:36 -- common/autotest_common.sh@955 -- # kill 74437 00:25:13.522 [2024-04-18 09:53:36.647340] app.c: 937:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:25:13.522 09:53:36 -- common/autotest_common.sh@960 -- # wait 74437 00:25:14.899 09:53:37 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:25:14.899 09:53:37 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:25:14.899 09:53:37 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:25:14.899 09:53:37 -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:25:14.899 09:53:37 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:25:14.899 09:53:37 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:14.899 09:53:37 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:25:14.899 09:53:37 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:14.899 09:53:37 -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:25:14.899 09:53:37 -- fips/fips.sh@18 -- # rm -f 
/home/vagrant/spdk_repo/spdk/test/nvmf/fips/key.txt 00:25:14.899 ************************************ 00:25:14.899 END TEST nvmf_fips 00:25:14.899 ************************************ 00:25:14.899 00:25:14.899 real 0m16.502s 00:25:14.899 user 0m23.640s 00:25:14.899 sys 0m5.445s 00:25:14.899 09:53:37 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:25:14.899 09:53:37 -- common/autotest_common.sh@10 -- # set +x 00:25:14.899 09:53:38 -- nvmf/nvmf.sh@64 -- # '[' 0 -eq 1 ']' 00:25:14.899 09:53:38 -- nvmf/nvmf.sh@70 -- # [[ virt == phy ]] 00:25:14.899 09:53:38 -- nvmf/nvmf.sh@84 -- # timing_exit target 00:25:14.899 09:53:38 -- common/autotest_common.sh@716 -- # xtrace_disable 00:25:14.899 09:53:38 -- common/autotest_common.sh@10 -- # set +x 00:25:14.899 09:53:38 -- nvmf/nvmf.sh@86 -- # timing_enter host 00:25:14.899 09:53:38 -- common/autotest_common.sh@710 -- # xtrace_disable 00:25:14.899 09:53:38 -- common/autotest_common.sh@10 -- # set +x 00:25:14.899 09:53:38 -- nvmf/nvmf.sh@88 -- # [[ 1 -eq 0 ]] 00:25:14.899 09:53:38 -- nvmf/nvmf.sh@95 -- # run_test nvmf_identify /home/vagrant/spdk_repo/spdk/test/nvmf/host/identify.sh --transport=tcp 00:25:14.899 09:53:38 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:25:14.899 09:53:38 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:25:14.899 09:53:38 -- common/autotest_common.sh@10 -- # set +x 00:25:14.899 ************************************ 00:25:14.899 START TEST nvmf_identify 00:25:14.899 ************************************ 00:25:14.899 09:53:38 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/identify.sh --transport=tcp 00:25:14.899 * Looking for test storage... 00:25:14.899 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:25:14.899 09:53:38 -- host/identify.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:25:14.899 09:53:38 -- nvmf/common.sh@7 -- # uname -s 00:25:14.899 09:53:38 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:25:14.899 09:53:38 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:14.899 09:53:38 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:14.899 09:53:38 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:14.899 09:53:38 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:25:14.899 09:53:38 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:14.899 09:53:38 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:14.899 09:53:38 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:14.899 09:53:38 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:14.899 09:53:38 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:14.899 09:53:38 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:e0c1fd61-2902-4c83-8893-eeb512aeb67d 00:25:14.899 09:53:38 -- nvmf/common.sh@18 -- # NVME_HOSTID=e0c1fd61-2902-4c83-8893-eeb512aeb67d 00:25:14.899 09:53:38 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:14.899 09:53:38 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:14.899 09:53:38 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:25:14.899 09:53:38 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:25:14.899 09:53:38 -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:25:14.899 09:53:38 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:14.899 09:53:38 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:14.899 09:53:38 -- scripts/common.sh@511 -- # source 
/etc/opt/spdk-pkgdep/paths/export.sh 00:25:14.899 09:53:38 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:14.899 09:53:38 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:14.899 09:53:38 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:14.899 09:53:38 -- paths/export.sh@5 -- # export PATH 00:25:14.899 09:53:38 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:14.899 09:53:38 -- nvmf/common.sh@47 -- # : 0 00:25:14.899 09:53:38 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:25:14.899 09:53:38 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:25:14.899 09:53:38 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:25:14.899 09:53:38 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:14.899 09:53:38 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:14.899 09:53:38 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:25:14.899 09:53:38 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:25:14.899 09:53:38 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:25:14.899 09:53:38 -- host/identify.sh@11 -- # MALLOC_BDEV_SIZE=64 00:25:14.899 09:53:38 -- host/identify.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:25:14.899 09:53:38 -- host/identify.sh@14 -- # nvmftestinit 00:25:14.899 09:53:38 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:25:14.899 09:53:38 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:25:14.899 09:53:38 -- nvmf/common.sh@437 -- # prepare_net_devs 00:25:14.899 09:53:38 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:25:14.899 09:53:38 -- nvmf/common.sh@401 -- # 
remove_spdk_ns 00:25:14.899 09:53:38 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:14.899 09:53:38 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:25:14.899 09:53:38 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:14.899 09:53:38 -- nvmf/common.sh@403 -- # [[ virt != virt ]] 00:25:14.899 09:53:38 -- nvmf/common.sh@405 -- # [[ no == yes ]] 00:25:14.899 09:53:38 -- nvmf/common.sh@412 -- # [[ virt == phy ]] 00:25:14.899 09:53:38 -- nvmf/common.sh@415 -- # [[ virt == phy-fallback ]] 00:25:14.899 09:53:38 -- nvmf/common.sh@420 -- # [[ tcp == tcp ]] 00:25:14.899 09:53:38 -- nvmf/common.sh@421 -- # nvmf_veth_init 00:25:14.899 09:53:38 -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:25:14.899 09:53:38 -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:25:14.899 09:53:38 -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:25:14.899 09:53:38 -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:25:14.899 09:53:38 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:25:14.899 09:53:38 -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:25:14.899 09:53:38 -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:25:14.899 09:53:38 -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:25:14.899 09:53:38 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:25:14.899 09:53:38 -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:25:14.899 09:53:38 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:25:14.899 09:53:38 -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:25:14.899 09:53:38 -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:25:14.899 09:53:38 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:25:14.899 Cannot find device "nvmf_tgt_br" 00:25:14.899 09:53:38 -- nvmf/common.sh@155 -- # true 00:25:14.899 09:53:38 -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:25:15.158 Cannot find device "nvmf_tgt_br2" 00:25:15.158 09:53:38 -- nvmf/common.sh@156 -- # true 00:25:15.158 09:53:38 -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:25:15.158 09:53:38 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:25:15.158 Cannot find device "nvmf_tgt_br" 00:25:15.158 09:53:38 -- nvmf/common.sh@158 -- # true 00:25:15.158 09:53:38 -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:25:15.158 Cannot find device "nvmf_tgt_br2" 00:25:15.158 09:53:38 -- nvmf/common.sh@159 -- # true 00:25:15.158 09:53:38 -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:25:15.158 09:53:38 -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:25:15.158 09:53:38 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:25:15.158 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:25:15.158 09:53:38 -- nvmf/common.sh@162 -- # true 00:25:15.158 09:53:38 -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:25:15.158 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:25:15.158 09:53:38 -- nvmf/common.sh@163 -- # true 00:25:15.158 09:53:38 -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:25:15.158 09:53:38 -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:25:15.158 09:53:38 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:25:15.158 09:53:38 -- 
nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:25:15.158 09:53:38 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:25:15.158 09:53:38 -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:25:15.158 09:53:38 -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:25:15.158 09:53:38 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:25:15.158 09:53:38 -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:25:15.158 09:53:38 -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:25:15.158 09:53:38 -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:25:15.158 09:53:38 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:25:15.158 09:53:38 -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:25:15.158 09:53:38 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:25:15.416 09:53:38 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:25:15.416 09:53:38 -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:25:15.416 09:53:38 -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:25:15.416 09:53:38 -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:25:15.416 09:53:38 -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:25:15.416 09:53:38 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:25:15.416 09:53:38 -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:25:15.416 09:53:38 -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:25:15.416 09:53:38 -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:25:15.416 09:53:38 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:25:15.416 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:25:15.416 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.090 ms 00:25:15.416 00:25:15.416 --- 10.0.0.2 ping statistics --- 00:25:15.416 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:15.416 rtt min/avg/max/mdev = 0.090/0.090/0.090/0.000 ms 00:25:15.416 09:53:38 -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:25:15.416 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:25:15.416 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.056 ms 00:25:15.416 00:25:15.416 --- 10.0.0.3 ping statistics --- 00:25:15.416 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:15.416 rtt min/avg/max/mdev = 0.056/0.056/0.056/0.000 ms 00:25:15.417 09:53:38 -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:25:15.417 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:25:15.417 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.033 ms 00:25:15.417 00:25:15.417 --- 10.0.0.1 ping statistics --- 00:25:15.417 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:15.417 rtt min/avg/max/mdev = 0.033/0.033/0.033/0.000 ms 00:25:15.417 09:53:38 -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:25:15.417 09:53:38 -- nvmf/common.sh@422 -- # return 0 00:25:15.417 09:53:38 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:25:15.417 09:53:38 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:25:15.417 09:53:38 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:25:15.417 09:53:38 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:25:15.417 09:53:38 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:25:15.417 09:53:38 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:25:15.417 09:53:38 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:25:15.417 09:53:38 -- host/identify.sh@16 -- # timing_enter start_nvmf_tgt 00:25:15.417 09:53:38 -- common/autotest_common.sh@710 -- # xtrace_disable 00:25:15.417 09:53:38 -- common/autotest_common.sh@10 -- # set +x 00:25:15.417 09:53:38 -- host/identify.sh@19 -- # nvmfpid=74850 00:25:15.417 09:53:38 -- host/identify.sh@18 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:25:15.417 09:53:38 -- host/identify.sh@21 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:25:15.417 09:53:38 -- host/identify.sh@23 -- # waitforlisten 74850 00:25:15.417 09:53:38 -- common/autotest_common.sh@817 -- # '[' -z 74850 ']' 00:25:15.417 09:53:38 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:15.417 09:53:38 -- common/autotest_common.sh@822 -- # local max_retries=100 00:25:15.417 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:15.417 09:53:38 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:15.417 09:53:38 -- common/autotest_common.sh@826 -- # xtrace_disable 00:25:15.417 09:53:38 -- common/autotest_common.sh@10 -- # set +x 00:25:15.417 [2024-04-18 09:53:38.791718] Starting SPDK v24.05-pre git sha1 65b4e17c6 / DPDK 23.11.0 initialization... 00:25:15.417 [2024-04-18 09:53:38.791898] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:15.675 [2024-04-18 09:53:38.969436] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:25:15.932 [2024-04-18 09:53:39.259033] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:25:15.932 [2024-04-18 09:53:39.259100] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:25:15.932 [2024-04-18 09:53:39.259121] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:25:15.932 [2024-04-18 09:53:39.259150] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:25:15.932 [2024-04-18 09:53:39.259165] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
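The nvmf_veth_init trace above assembles the virtual test network used by the TCP transport tests: a target network namespace (nvmf_tgt_ns_spdk), veth pairs whose host-side peers are enslaved to a bridge, 10.0.0.x/24 addresses, and iptables rules for port 4420, all verified by the three pings. A condensed sketch of the equivalent commands, using only the namespace, interface, and address names that appear in the trace (illustrative annotation, not an excerpt from the captured run):

# create the target namespace and the host<->target veth pairs
ip netns add nvmf_tgt_ns_spdk
ip link add nvmf_init_if type veth peer name nvmf_init_br
ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2
# move the target-side ends into the namespace and assign addresses
ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
ip addr add 10.0.0.1/24 dev nvmf_init_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2
# bring everything up and bridge the host-side peers together
ip link set nvmf_init_if up; ip link set nvmf_init_br up
ip link set nvmf_tgt_br up; ip link set nvmf_tgt_br2 up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
ip netns exec nvmf_tgt_ns_spdk ip link set lo up
ip link add nvmf_br type bridge && ip link set nvmf_br up
ip link set nvmf_init_br master nvmf_br
ip link set nvmf_tgt_br master nvmf_br
ip link set nvmf_tgt_br2 master nvmf_br
# allow NVMe/TCP traffic on port 4420 and forwarding across the bridge
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
# sanity-check reachability in both directions
ping -c 1 10.0.0.2 && ping -c 1 10.0.0.3
ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1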
00:25:15.932 [2024-04-18 09:53:39.259430] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:25:15.933 [2024-04-18 09:53:39.259548] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:25:15.933 [2024-04-18 09:53:39.260038] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:25:15.933 [2024-04-18 09:53:39.260043] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:25:16.499 09:53:39 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:25:16.499 09:53:39 -- common/autotest_common.sh@850 -- # return 0 00:25:16.499 09:53:39 -- host/identify.sh@24 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:25:16.499 09:53:39 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:16.499 09:53:39 -- common/autotest_common.sh@10 -- # set +x 00:25:16.499 [2024-04-18 09:53:39.721041] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:16.499 09:53:39 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:16.499 09:53:39 -- host/identify.sh@25 -- # timing_exit start_nvmf_tgt 00:25:16.499 09:53:39 -- common/autotest_common.sh@716 -- # xtrace_disable 00:25:16.499 09:53:39 -- common/autotest_common.sh@10 -- # set +x 00:25:16.499 09:53:39 -- host/identify.sh@27 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:25:16.499 09:53:39 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:16.499 09:53:39 -- common/autotest_common.sh@10 -- # set +x 00:25:16.499 Malloc0 00:25:16.499 09:53:39 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:16.499 09:53:39 -- host/identify.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:25:16.499 09:53:39 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:16.499 09:53:39 -- common/autotest_common.sh@10 -- # set +x 00:25:16.757 09:53:39 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:16.757 09:53:39 -- host/identify.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 --nguid ABCDEF0123456789ABCDEF0123456789 --eui64 ABCDEF0123456789 00:25:16.757 09:53:39 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:16.757 09:53:39 -- common/autotest_common.sh@10 -- # set +x 00:25:16.757 09:53:39 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:16.757 09:53:39 -- host/identify.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:25:16.757 09:53:39 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:16.757 09:53:39 -- common/autotest_common.sh@10 -- # set +x 00:25:16.757 [2024-04-18 09:53:39.908065] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:16.757 09:53:39 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:16.757 09:53:39 -- host/identify.sh@35 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:25:16.757 09:53:39 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:16.757 09:53:39 -- common/autotest_common.sh@10 -- # set +x 00:25:16.757 09:53:39 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:16.757 09:53:39 -- host/identify.sh@37 -- # rpc_cmd nvmf_get_subsystems 00:25:16.757 09:53:39 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:16.757 09:53:39 -- common/autotest_common.sh@10 -- # set +x 00:25:16.757 [2024-04-18 09:53:39.923701] nvmf_rpc.c: 275:rpc_nvmf_get_subsystems: *WARNING*: rpc_nvmf_get_subsystems: deprecated feature listener.transport is deprecated in favor of trtype to be removed in v24.05 00:25:16.757 [ 
00:25:16.757 { 00:25:16.757 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:25:16.757 "subtype": "Discovery", 00:25:16.757 "listen_addresses": [ 00:25:16.757 { 00:25:16.757 "transport": "TCP", 00:25:16.757 "trtype": "TCP", 00:25:16.757 "adrfam": "IPv4", 00:25:16.757 "traddr": "10.0.0.2", 00:25:16.757 "trsvcid": "4420" 00:25:16.757 } 00:25:16.757 ], 00:25:16.757 "allow_any_host": true, 00:25:16.757 "hosts": [] 00:25:16.757 }, 00:25:16.757 { 00:25:16.757 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:25:16.757 "subtype": "NVMe", 00:25:16.757 "listen_addresses": [ 00:25:16.757 { 00:25:16.757 "transport": "TCP", 00:25:16.757 "trtype": "TCP", 00:25:16.757 "adrfam": "IPv4", 00:25:16.757 "traddr": "10.0.0.2", 00:25:16.757 "trsvcid": "4420" 00:25:16.757 } 00:25:16.757 ], 00:25:16.757 "allow_any_host": true, 00:25:16.757 "hosts": [], 00:25:16.757 "serial_number": "SPDK00000000000001", 00:25:16.757 "model_number": "SPDK bdev Controller", 00:25:16.757 "max_namespaces": 32, 00:25:16.757 "min_cntlid": 1, 00:25:16.757 "max_cntlid": 65519, 00:25:16.757 "namespaces": [ 00:25:16.757 { 00:25:16.757 "nsid": 1, 00:25:16.757 "bdev_name": "Malloc0", 00:25:16.757 "name": "Malloc0", 00:25:16.757 "nguid": "ABCDEF0123456789ABCDEF0123456789", 00:25:16.757 "eui64": "ABCDEF0123456789", 00:25:16.757 "uuid": "6e80859f-6027-4883-921d-a5e1ab43a749" 00:25:16.757 } 00:25:16.757 ] 00:25:16.757 } 00:25:16.757 ] 00:25:16.757 09:53:39 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:16.757 09:53:39 -- host/identify.sh@39 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' -L all 00:25:16.757 [2024-04-18 09:53:39.988432] Starting SPDK v24.05-pre git sha1 65b4e17c6 / DPDK 23.11.0 initialization... 
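The JSON above is the nvmf_get_subsystems output after the target has been configured by the preceding rpc_cmd calls: a TCP transport, a 64 MiB malloc bdev exported as namespace 1 of nqn.2016-06.io.spdk:cnode1, and TCP listeners on 10.0.0.2:4420 for both that subsystem and the discovery service. In these test scripts rpc_cmd is a thin wrapper that forwards to scripts/rpc.py over /var/tmp/spdk.sock, so the same target state could be reproduced by hand roughly as follows (a sketch assembled from the traced calls, not an excerpt from this log):

# start the target inside the test namespace (binary path and flags as seen in the trace)
ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
# configure it over the JSON-RPC socket
scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 --nguid ABCDEF0123456789ABCDEF0123456789 --eui64 ABCDEF0123456789
scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
scripts/rpc.py nvmf_get_subsystems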
00:25:16.757 [2024-04-18 09:53:39.988546] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74885 ] 00:25:17.018 [2024-04-18 09:53:40.158780] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to connect adminq (no timeout) 00:25:17.018 [2024-04-18 09:53:40.158960] nvme_tcp.c:2326:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:25:17.018 [2024-04-18 09:53:40.158975] nvme_tcp.c:2330:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:25:17.018 [2024-04-18 09:53:40.159004] nvme_tcp.c:2348:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:25:17.018 [2024-04-18 09:53:40.159026] sock.c: 336:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl uring 00:25:17.018 [2024-04-18 09:53:40.159222] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for connect adminq (no timeout) 00:25:17.019 [2024-04-18 09:53:40.159305] nvme_tcp.c:1543:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0x614000002040 0 00:25:17.019 [2024-04-18 09:53:40.174152] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:25:17.019 [2024-04-18 09:53:40.174192] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:25:17.019 [2024-04-18 09:53:40.174204] nvme_tcp.c:1589:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:25:17.019 [2024-04-18 09:53:40.174212] nvme_tcp.c:1590:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:25:17.019 [2024-04-18 09:53:40.174312] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:17.019 [2024-04-18 09:53:40.174332] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:17.019 [2024-04-18 09:53:40.174341] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x614000002040) 00:25:17.019 [2024-04-18 09:53:40.174368] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:25:17.019 [2024-04-18 09:53:40.174411] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b100, cid 0, qid 0 00:25:17.019 [2024-04-18 09:53:40.182165] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:17.019 [2024-04-18 09:53:40.182197] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:17.019 [2024-04-18 09:53:40.182205] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:17.019 [2024-04-18 09:53:40.182215] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x62600001b100) on tqpair=0x614000002040 00:25:17.019 [2024-04-18 09:53:40.182237] nvme_fabric.c: 622:_nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:25:17.019 [2024-04-18 09:53:40.182256] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read vs (no timeout) 00:25:17.019 [2024-04-18 09:53:40.182266] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read vs wait for vs (no timeout) 00:25:17.019 [2024-04-18 09:53:40.182299] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:17.019 [2024-04-18 09:53:40.182309] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 
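The *DEBUG* lines that follow trace spdk_nvme_identify bringing up the discovery controller at 10.0.0.2:4420: the FABRIC CONNECT capsule on the admin queue, the FABRIC PROPERTY GET/SET exchanges that read VS and CAP and toggle CC.EN until CSTS.RDY reports ready, and then the IDENTIFY and GET LOG PAGE commands whose decoded output is printed further below. For comparison only (not part of this run), with the nvme-tcp kernel module already loaded by the setup above, the same discovery log page could be fetched from the host side with nvme-cli:

nvme discover -t tcp -a 10.0.0.2 -s 4420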
00:25:17.019 [2024-04-18 09:53:40.182317] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x614000002040) 00:25:17.019 [2024-04-18 09:53:40.182335] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.019 [2024-04-18 09:53:40.182372] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b100, cid 0, qid 0 00:25:17.019 [2024-04-18 09:53:40.182505] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:17.019 [2024-04-18 09:53:40.182525] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:17.019 [2024-04-18 09:53:40.182533] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:17.019 [2024-04-18 09:53:40.182542] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x62600001b100) on tqpair=0x614000002040 00:25:17.019 [2024-04-18 09:53:40.182553] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read cap (no timeout) 00:25:17.019 [2024-04-18 09:53:40.182571] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read cap wait for cap (no timeout) 00:25:17.019 [2024-04-18 09:53:40.182586] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:17.019 [2024-04-18 09:53:40.182594] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:17.019 [2024-04-18 09:53:40.182602] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x614000002040) 00:25:17.019 [2024-04-18 09:53:40.182621] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.019 [2024-04-18 09:53:40.182650] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b100, cid 0, qid 0 00:25:17.019 [2024-04-18 09:53:40.182726] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:17.019 [2024-04-18 09:53:40.182738] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:17.019 [2024-04-18 09:53:40.182747] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:17.019 [2024-04-18 09:53:40.182755] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x62600001b100) on tqpair=0x614000002040 00:25:17.019 [2024-04-18 09:53:40.182766] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to check en (no timeout) 00:25:17.019 [2024-04-18 09:53:40.182781] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to check en wait for cc (timeout 15000 ms) 00:25:17.019 [2024-04-18 09:53:40.182805] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:17.019 [2024-04-18 09:53:40.182818] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:17.019 [2024-04-18 09:53:40.182826] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x614000002040) 00:25:17.019 [2024-04-18 09:53:40.182840] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.019 [2024-04-18 09:53:40.182866] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b100, cid 0, qid 0 00:25:17.019 [2024-04-18 09:53:40.182947] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 
00:25:17.019 [2024-04-18 09:53:40.182963] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:17.019 [2024-04-18 09:53:40.182970] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:17.019 [2024-04-18 09:53:40.182977] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x62600001b100) on tqpair=0x614000002040 00:25:17.019 [2024-04-18 09:53:40.182988] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:25:17.019 [2024-04-18 09:53:40.183009] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:17.019 [2024-04-18 09:53:40.183018] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:17.019 [2024-04-18 09:53:40.183026] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x614000002040) 00:25:17.019 [2024-04-18 09:53:40.183040] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.019 [2024-04-18 09:53:40.183071] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b100, cid 0, qid 0 00:25:17.019 [2024-04-18 09:53:40.183163] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:17.019 [2024-04-18 09:53:40.183177] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:17.019 [2024-04-18 09:53:40.183183] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:17.019 [2024-04-18 09:53:40.183193] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x62600001b100) on tqpair=0x614000002040 00:25:17.019 [2024-04-18 09:53:40.183204] nvme_ctrlr.c:3749:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CC.EN = 0 && CSTS.RDY = 0 00:25:17.019 [2024-04-18 09:53:40.183214] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to controller is disabled (timeout 15000 ms) 00:25:17.019 [2024-04-18 09:53:40.183228] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:25:17.019 [2024-04-18 09:53:40.183342] nvme_ctrlr.c:3942:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Setting CC.EN = 1 00:25:17.019 [2024-04-18 09:53:40.183351] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:25:17.019 [2024-04-18 09:53:40.183368] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:17.019 [2024-04-18 09:53:40.183376] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:17.019 [2024-04-18 09:53:40.183384] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x614000002040) 00:25:17.019 [2024-04-18 09:53:40.183398] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.019 [2024-04-18 09:53:40.183433] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b100, cid 0, qid 0 00:25:17.019 [2024-04-18 09:53:40.183507] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:17.019 [2024-04-18 09:53:40.183519] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:17.019 [2024-04-18 
09:53:40.183528] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:17.019 [2024-04-18 09:53:40.183535] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x62600001b100) on tqpair=0x614000002040 00:25:17.019 [2024-04-18 09:53:40.183545] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:25:17.019 [2024-04-18 09:53:40.183563] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:17.019 [2024-04-18 09:53:40.183572] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:17.019 [2024-04-18 09:53:40.183579] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x614000002040) 00:25:17.019 [2024-04-18 09:53:40.183594] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.019 [2024-04-18 09:53:40.183619] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b100, cid 0, qid 0 00:25:17.019 [2024-04-18 09:53:40.183703] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:17.019 [2024-04-18 09:53:40.183714] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:17.019 [2024-04-18 09:53:40.183720] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:17.019 [2024-04-18 09:53:40.183727] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x62600001b100) on tqpair=0x614000002040 00:25:17.019 [2024-04-18 09:53:40.183736] nvme_ctrlr.c:3784:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:25:17.019 [2024-04-18 09:53:40.183749] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to reset admin queue (timeout 30000 ms) 00:25:17.019 [2024-04-18 09:53:40.183787] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to identify controller (no timeout) 00:25:17.019 [2024-04-18 09:53:40.183812] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for identify controller (timeout 30000 ms) 00:25:17.019 [2024-04-18 09:53:40.183836] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:17.019 [2024-04-18 09:53:40.183845] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x614000002040) 00:25:17.019 [2024-04-18 09:53:40.183860] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.019 [2024-04-18 09:53:40.183890] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b100, cid 0, qid 0 00:25:17.019 [2024-04-18 09:53:40.184031] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:25:17.019 [2024-04-18 09:53:40.184056] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:25:17.019 [2024-04-18 09:53:40.184063] nvme_tcp.c:1707:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:25:17.019 [2024-04-18 09:53:40.184072] nvme_tcp.c:1708:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x614000002040): datao=0, datal=4096, cccid=0 00:25:17.019 [2024-04-18 09:53:40.184082] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x62600001b100) on 
tqpair(0x614000002040): expected_datao=0, payload_size=4096 00:25:17.019 [2024-04-18 09:53:40.184090] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:17.019 [2024-04-18 09:53:40.184110] nvme_tcp.c:1509:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:25:17.019 [2024-04-18 09:53:40.184120] nvme_tcp.c:1293:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:25:17.020 [2024-04-18 09:53:40.184149] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:17.020 [2024-04-18 09:53:40.184164] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:17.020 [2024-04-18 09:53:40.184172] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:17.020 [2024-04-18 09:53:40.184178] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x62600001b100) on tqpair=0x614000002040 00:25:17.020 [2024-04-18 09:53:40.184198] nvme_ctrlr.c:1984:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] transport max_xfer_size 4294967295 00:25:17.020 [2024-04-18 09:53:40.184212] nvme_ctrlr.c:1988:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] MDTS max_xfer_size 131072 00:25:17.020 [2024-04-18 09:53:40.184220] nvme_ctrlr.c:1991:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CNTLID 0x0001 00:25:17.020 [2024-04-18 09:53:40.184233] nvme_ctrlr.c:2015:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] transport max_sges 16 00:25:17.020 [2024-04-18 09:53:40.184242] nvme_ctrlr.c:2030:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] fuses compare and write: 1 00:25:17.020 [2024-04-18 09:53:40.184251] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to configure AER (timeout 30000 ms) 00:25:17.020 [2024-04-18 09:53:40.184269] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for configure aer (timeout 30000 ms) 00:25:17.020 [2024-04-18 09:53:40.184285] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:17.020 [2024-04-18 09:53:40.184294] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:17.020 [2024-04-18 09:53:40.184302] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x614000002040) 00:25:17.020 [2024-04-18 09:53:40.184320] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:25:17.020 [2024-04-18 09:53:40.184351] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b100, cid 0, qid 0 00:25:17.020 [2024-04-18 09:53:40.184458] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:17.020 [2024-04-18 09:53:40.184480] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:17.020 [2024-04-18 09:53:40.184488] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:17.020 [2024-04-18 09:53:40.184499] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x62600001b100) on tqpair=0x614000002040 00:25:17.020 [2024-04-18 09:53:40.184513] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:17.020 [2024-04-18 09:53:40.184522] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:17.020 [2024-04-18 09:53:40.184541] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x614000002040) 00:25:17.020 [2024-04-18 
09:53:40.184558] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:25:17.020 [2024-04-18 09:53:40.184571] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:17.020 [2024-04-18 09:53:40.184578] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:17.020 [2024-04-18 09:53:40.184584] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0x614000002040) 00:25:17.020 [2024-04-18 09:53:40.184595] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:25:17.020 [2024-04-18 09:53:40.184605] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:17.020 [2024-04-18 09:53:40.184611] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:17.020 [2024-04-18 09:53:40.184617] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0x614000002040) 00:25:17.020 [2024-04-18 09:53:40.184633] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:25:17.020 [2024-04-18 09:53:40.184644] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:17.020 [2024-04-18 09:53:40.184650] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:17.020 [2024-04-18 09:53:40.184656] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x614000002040) 00:25:17.020 [2024-04-18 09:53:40.184667] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:25:17.020 [2024-04-18 09:53:40.184676] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to set keep alive timeout (timeout 30000 ms) 00:25:17.020 [2024-04-18 09:53:40.184694] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:25:17.020 [2024-04-18 09:53:40.184707] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:17.020 [2024-04-18 09:53:40.184715] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x614000002040) 00:25:17.020 [2024-04-18 09:53:40.184732] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.020 [2024-04-18 09:53:40.184764] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b100, cid 0, qid 0 00:25:17.020 [2024-04-18 09:53:40.184775] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b260, cid 1, qid 0 00:25:17.020 [2024-04-18 09:53:40.184787] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b3c0, cid 2, qid 0 00:25:17.020 [2024-04-18 09:53:40.184796] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b520, cid 3, qid 0 00:25:17.020 [2024-04-18 09:53:40.184804] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b680, cid 4, qid 0 00:25:17.020 [2024-04-18 09:53:40.184954] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:17.020 [2024-04-18 09:53:40.184967] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:17.020 [2024-04-18 09:53:40.184973] 
nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:17.020 [2024-04-18 09:53:40.184980] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x62600001b680) on tqpair=0x614000002040 00:25:17.020 [2024-04-18 09:53:40.184995] nvme_ctrlr.c:2902:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Sending keep alive every 5000000 us 00:25:17.020 [2024-04-18 09:53:40.185006] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to ready (no timeout) 00:25:17.020 [2024-04-18 09:53:40.185030] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:17.020 [2024-04-18 09:53:40.185039] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x614000002040) 00:25:17.020 [2024-04-18 09:53:40.185053] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.020 [2024-04-18 09:53:40.185084] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b680, cid 4, qid 0 00:25:17.020 [2024-04-18 09:53:40.185196] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:25:17.020 [2024-04-18 09:53:40.185210] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:25:17.020 [2024-04-18 09:53:40.185222] nvme_tcp.c:1707:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:25:17.020 [2024-04-18 09:53:40.185230] nvme_tcp.c:1708:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x614000002040): datao=0, datal=4096, cccid=4 00:25:17.020 [2024-04-18 09:53:40.185240] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x62600001b680) on tqpair(0x614000002040): expected_datao=0, payload_size=4096 00:25:17.020 [2024-04-18 09:53:40.185248] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:17.020 [2024-04-18 09:53:40.185261] nvme_tcp.c:1509:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:25:17.020 [2024-04-18 09:53:40.185269] nvme_tcp.c:1293:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:25:17.020 [2024-04-18 09:53:40.185290] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:17.020 [2024-04-18 09:53:40.185301] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:17.020 [2024-04-18 09:53:40.185307] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:17.020 [2024-04-18 09:53:40.185315] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x62600001b680) on tqpair=0x614000002040 00:25:17.020 [2024-04-18 09:53:40.185344] nvme_ctrlr.c:4036:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Ctrlr already in ready state 00:25:17.020 [2024-04-18 09:53:40.185400] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:17.020 [2024-04-18 09:53:40.185418] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x614000002040) 00:25:17.020 [2024-04-18 09:53:40.185434] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.020 [2024-04-18 09:53:40.185446] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:17.020 [2024-04-18 09:53:40.185454] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:17.020 [2024-04-18 09:53:40.185462] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: 
capsule_cmd cid=5 on tqpair(0x614000002040) 00:25:17.020 [2024-04-18 09:53:40.185480] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:25:17.020 [2024-04-18 09:53:40.185521] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b680, cid 4, qid 0 00:25:17.020 [2024-04-18 09:53:40.185533] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b7e0, cid 5, qid 0 00:25:17.020 [2024-04-18 09:53:40.185968] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:25:17.020 [2024-04-18 09:53:40.185995] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:25:17.020 [2024-04-18 09:53:40.186003] nvme_tcp.c:1707:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:25:17.020 [2024-04-18 09:53:40.186011] nvme_tcp.c:1708:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x614000002040): datao=0, datal=1024, cccid=4 00:25:17.020 [2024-04-18 09:53:40.186025] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x62600001b680) on tqpair(0x614000002040): expected_datao=0, payload_size=1024 00:25:17.020 [2024-04-18 09:53:40.186033] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:17.020 [2024-04-18 09:53:40.186053] nvme_tcp.c:1509:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:25:17.020 [2024-04-18 09:53:40.186061] nvme_tcp.c:1293:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:25:17.020 [2024-04-18 09:53:40.186075] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:17.020 [2024-04-18 09:53:40.186085] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:17.020 [2024-04-18 09:53:40.186091] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:17.020 [2024-04-18 09:53:40.186098] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x62600001b7e0) on tqpair=0x614000002040 00:25:17.020 [2024-04-18 09:53:40.186154] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:17.020 [2024-04-18 09:53:40.186169] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:17.020 [2024-04-18 09:53:40.186176] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:17.020 [2024-04-18 09:53:40.186183] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x62600001b680) on tqpair=0x614000002040 00:25:17.020 [2024-04-18 09:53:40.186213] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:17.020 [2024-04-18 09:53:40.186223] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x614000002040) 00:25:17.020 [2024-04-18 09:53:40.186243] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:02ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.020 [2024-04-18 09:53:40.186283] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b680, cid 4, qid 0 00:25:17.021 [2024-04-18 09:53:40.186413] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:25:17.021 [2024-04-18 09:53:40.186426] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:25:17.021 [2024-04-18 09:53:40.186432] nvme_tcp.c:1707:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:25:17.021 [2024-04-18 09:53:40.186439] nvme_tcp.c:1708:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x614000002040): datao=0, datal=3072, cccid=4 00:25:17.021 [2024-04-18 09:53:40.186472] 
nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x62600001b680) on tqpair(0x614000002040): expected_datao=0, payload_size=3072 00:25:17.021 [2024-04-18 09:53:40.186481] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:17.021 [2024-04-18 09:53:40.186493] nvme_tcp.c:1509:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:25:17.021 [2024-04-18 09:53:40.186500] nvme_tcp.c:1293:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:25:17.021 [2024-04-18 09:53:40.186513] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:17.021 [2024-04-18 09:53:40.186527] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:17.021 [2024-04-18 09:53:40.186534] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:17.021 [2024-04-18 09:53:40.186541] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x62600001b680) on tqpair=0x614000002040 00:25:17.021 [2024-04-18 09:53:40.186562] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:17.021 [2024-04-18 09:53:40.186575] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x614000002040) 00:25:17.021 [2024-04-18 09:53:40.186590] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00010070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.021 [2024-04-18 09:53:40.186626] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b680, cid 4, qid 0 00:25:17.021 [2024-04-18 09:53:40.186752] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:25:17.021 [2024-04-18 09:53:40.186764] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:25:17.021 [2024-04-18 09:53:40.186771] nvme_tcp.c:1707:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:25:17.021 [2024-04-18 09:53:40.186778] nvme_tcp.c:1708:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x614000002040): datao=0, datal=8, cccid=4 00:25:17.021 [2024-04-18 09:53:40.186795] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x62600001b680) on tqpair(0x614000002040): expected_datao=0, payload_size=8 00:25:17.021 [2024-04-18 09:53:40.186803] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:17.021 [2024-04-18 09:53:40.186823] nvme_tcp.c:1509:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:25:17.021 [2024-04-18 09:53:40.186831] nvme_tcp.c:1293:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:25:17.021 [2024-04-18 09:53:40.186854] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:17.021 [2024-04-18 09:53:40.186867] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:17.021 [2024-04-18 09:53:40.186873] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:17.021 [2024-04-18 09:53:40.186880] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x62600001b680) on tqpair=0x614000002040 00:25:17.021 ===================================================== 00:25:17.021 NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2014-08.org.nvmexpress.discovery 00:25:17.021 ===================================================== 00:25:17.021 Controller Capabilities/Features 00:25:17.021 ================================ 00:25:17.021 Vendor ID: 0000 00:25:17.021 Subsystem Vendor ID: 0000 00:25:17.021 Serial Number: .................... 00:25:17.021 Model Number: ........................................ 
00:25:17.021 Firmware Version: 24.05 00:25:17.021 Recommended Arb Burst: 0 00:25:17.021 IEEE OUI Identifier: 00 00 00 00:25:17.021 Multi-path I/O 00:25:17.021 May have multiple subsystem ports: No 00:25:17.021 May have multiple controllers: No 00:25:17.021 Associated with SR-IOV VF: No 00:25:17.021 Max Data Transfer Size: 131072 00:25:17.021 Max Number of Namespaces: 0 00:25:17.021 Max Number of I/O Queues: 1024 00:25:17.021 NVMe Specification Version (VS): 1.3 00:25:17.021 NVMe Specification Version (Identify): 1.3 00:25:17.021 Maximum Queue Entries: 128 00:25:17.021 Contiguous Queues Required: Yes 00:25:17.021 Arbitration Mechanisms Supported 00:25:17.021 Weighted Round Robin: Not Supported 00:25:17.021 Vendor Specific: Not Supported 00:25:17.021 Reset Timeout: 15000 ms 00:25:17.021 Doorbell Stride: 4 bytes 00:25:17.021 NVM Subsystem Reset: Not Supported 00:25:17.021 Command Sets Supported 00:25:17.021 NVM Command Set: Supported 00:25:17.021 Boot Partition: Not Supported 00:25:17.021 Memory Page Size Minimum: 4096 bytes 00:25:17.021 Memory Page Size Maximum: 4096 bytes 00:25:17.021 Persistent Memory Region: Not Supported 00:25:17.021 Optional Asynchronous Events Supported 00:25:17.021 Namespace Attribute Notices: Not Supported 00:25:17.021 Firmware Activation Notices: Not Supported 00:25:17.021 ANA Change Notices: Not Supported 00:25:17.021 PLE Aggregate Log Change Notices: Not Supported 00:25:17.021 LBA Status Info Alert Notices: Not Supported 00:25:17.021 EGE Aggregate Log Change Notices: Not Supported 00:25:17.021 Normal NVM Subsystem Shutdown event: Not Supported 00:25:17.021 Zone Descriptor Change Notices: Not Supported 00:25:17.021 Discovery Log Change Notices: Supported 00:25:17.021 Controller Attributes 00:25:17.021 128-bit Host Identifier: Not Supported 00:25:17.021 Non-Operational Permissive Mode: Not Supported 00:25:17.021 NVM Sets: Not Supported 00:25:17.021 Read Recovery Levels: Not Supported 00:25:17.021 Endurance Groups: Not Supported 00:25:17.021 Predictable Latency Mode: Not Supported 00:25:17.021 Traffic Based Keep ALive: Not Supported 00:25:17.021 Namespace Granularity: Not Supported 00:25:17.021 SQ Associations: Not Supported 00:25:17.021 UUID List: Not Supported 00:25:17.021 Multi-Domain Subsystem: Not Supported 00:25:17.021 Fixed Capacity Management: Not Supported 00:25:17.021 Variable Capacity Management: Not Supported 00:25:17.021 Delete Endurance Group: Not Supported 00:25:17.021 Delete NVM Set: Not Supported 00:25:17.021 Extended LBA Formats Supported: Not Supported 00:25:17.021 Flexible Data Placement Supported: Not Supported 00:25:17.021 00:25:17.021 Controller Memory Buffer Support 00:25:17.021 ================================ 00:25:17.021 Supported: No 00:25:17.021 00:25:17.021 Persistent Memory Region Support 00:25:17.021 ================================ 00:25:17.021 Supported: No 00:25:17.021 00:25:17.021 Admin Command Set Attributes 00:25:17.021 ============================ 00:25:17.021 Security Send/Receive: Not Supported 00:25:17.021 Format NVM: Not Supported 00:25:17.021 Firmware Activate/Download: Not Supported 00:25:17.021 Namespace Management: Not Supported 00:25:17.021 Device Self-Test: Not Supported 00:25:17.021 Directives: Not Supported 00:25:17.021 NVMe-MI: Not Supported 00:25:17.021 Virtualization Management: Not Supported 00:25:17.021 Doorbell Buffer Config: Not Supported 00:25:17.021 Get LBA Status Capability: Not Supported 00:25:17.021 Command & Feature Lockdown Capability: Not Supported 00:25:17.021 Abort Command Limit: 1 00:25:17.021 Async 
Event Request Limit: 4 00:25:17.021 Number of Firmware Slots: N/A 00:25:17.021 Firmware Slot 1 Read-Only: N/A 00:25:17.021 Firmware Activation Without Reset: N/A 00:25:17.021 Multiple Update Detection Support: N/A 00:25:17.021 Firmware Update Granularity: No Information Provided 00:25:17.021 Per-Namespace SMART Log: No 00:25:17.021 Asymmetric Namespace Access Log Page: Not Supported 00:25:17.021 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery 00:25:17.021 Command Effects Log Page: Not Supported 00:25:17.021 Get Log Page Extended Data: Supported 00:25:17.021 Telemetry Log Pages: Not Supported 00:25:17.021 Persistent Event Log Pages: Not Supported 00:25:17.021 Supported Log Pages Log Page: May Support 00:25:17.021 Commands Supported & Effects Log Page: Not Supported 00:25:17.021 Feature Identifiers & Effects Log Page:May Support 00:25:17.021 NVMe-MI Commands & Effects Log Page: May Support 00:25:17.021 Data Area 4 for Telemetry Log: Not Supported 00:25:17.021 Error Log Page Entries Supported: 128 00:25:17.021 Keep Alive: Not Supported 00:25:17.021 00:25:17.021 NVM Command Set Attributes 00:25:17.021 ========================== 00:25:17.021 Submission Queue Entry Size 00:25:17.021 Max: 1 00:25:17.021 Min: 1 00:25:17.021 Completion Queue Entry Size 00:25:17.021 Max: 1 00:25:17.021 Min: 1 00:25:17.021 Number of Namespaces: 0 00:25:17.021 Compare Command: Not Supported 00:25:17.021 Write Uncorrectable Command: Not Supported 00:25:17.021 Dataset Management Command: Not Supported 00:25:17.021 Write Zeroes Command: Not Supported 00:25:17.021 Set Features Save Field: Not Supported 00:25:17.021 Reservations: Not Supported 00:25:17.021 Timestamp: Not Supported 00:25:17.021 Copy: Not Supported 00:25:17.021 Volatile Write Cache: Not Present 00:25:17.021 Atomic Write Unit (Normal): 1 00:25:17.021 Atomic Write Unit (PFail): 1 00:25:17.021 Atomic Compare & Write Unit: 1 00:25:17.021 Fused Compare & Write: Supported 00:25:17.021 Scatter-Gather List 00:25:17.021 SGL Command Set: Supported 00:25:17.021 SGL Keyed: Supported 00:25:17.021 SGL Bit Bucket Descriptor: Not Supported 00:25:17.021 SGL Metadata Pointer: Not Supported 00:25:17.021 Oversized SGL: Not Supported 00:25:17.021 SGL Metadata Address: Not Supported 00:25:17.021 SGL Offset: Supported 00:25:17.021 Transport SGL Data Block: Not Supported 00:25:17.021 Replay Protected Memory Block: Not Supported 00:25:17.021 00:25:17.021 Firmware Slot Information 00:25:17.021 ========================= 00:25:17.021 Active slot: 0 00:25:17.021 00:25:17.021 00:25:17.021 Error Log 00:25:17.021 ========= 00:25:17.021 00:25:17.022 Active Namespaces 00:25:17.022 ================= 00:25:17.022 Discovery Log Page 00:25:17.022 ================== 00:25:17.022 Generation Counter: 2 00:25:17.022 Number of Records: 2 00:25:17.022 Record Format: 0 00:25:17.022 00:25:17.022 Discovery Log Entry 0 00:25:17.022 ---------------------- 00:25:17.022 Transport Type: 3 (TCP) 00:25:17.022 Address Family: 1 (IPv4) 00:25:17.022 Subsystem Type: 3 (Current Discovery Subsystem) 00:25:17.022 Entry Flags: 00:25:17.022 Duplicate Returned Information: 1 00:25:17.022 Explicit Persistent Connection Support for Discovery: 1 00:25:17.022 Transport Requirements: 00:25:17.022 Secure Channel: Not Required 00:25:17.022 Port ID: 0 (0x0000) 00:25:17.022 Controller ID: 65535 (0xffff) 00:25:17.022 Admin Max SQ Size: 128 00:25:17.022 Transport Service Identifier: 4420 00:25:17.022 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery 00:25:17.022 Transport Address: 10.0.0.2 00:25:17.022 
Discovery Log Entry 1 00:25:17.022 ---------------------- 00:25:17.022 Transport Type: 3 (TCP) 00:25:17.022 Address Family: 1 (IPv4) 00:25:17.022 Subsystem Type: 2 (NVM Subsystem) 00:25:17.022 Entry Flags: 00:25:17.022 Duplicate Returned Information: 0 00:25:17.022 Explicit Persistent Connection Support for Discovery: 0 00:25:17.022 Transport Requirements: 00:25:17.022 Secure Channel: Not Required 00:25:17.022 Port ID: 0 (0x0000) 00:25:17.022 Controller ID: 65535 (0xffff) 00:25:17.022 Admin Max SQ Size: 128 00:25:17.022 Transport Service Identifier: 4420 00:25:17.022 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:cnode1 00:25:17.022 Transport Address: 10.0.0.2 [2024-04-18 09:53:40.187061] nvme_ctrlr.c:4221:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Prepare to destruct SSD 00:25:17.022 [2024-04-18 09:53:40.187090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:17.022 [2024-04-18 09:53:40.187105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:17.022 [2024-04-18 09:53:40.187115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:17.022 [2024-04-18 09:53:40.187141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:17.022 [2024-04-18 09:53:40.187167] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:17.022 [2024-04-18 09:53:40.187177] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:17.022 [2024-04-18 09:53:40.187184] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x614000002040) 00:25:17.022 [2024-04-18 09:53:40.187201] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.022 [2024-04-18 09:53:40.187241] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b520, cid 3, qid 0 00:25:17.022 [2024-04-18 09:53:40.187324] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:17.022 [2024-04-18 09:53:40.187337] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:17.022 [2024-04-18 09:53:40.187344] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:17.022 [2024-04-18 09:53:40.187352] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x62600001b520) on tqpair=0x614000002040 00:25:17.022 [2024-04-18 09:53:40.187367] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:17.022 [2024-04-18 09:53:40.187376] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:17.022 [2024-04-18 09:53:40.187383] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x614000002040) 00:25:17.022 [2024-04-18 09:53:40.187402] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.022 [2024-04-18 09:53:40.187439] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b520, cid 3, qid 0 00:25:17.022 [2024-04-18 09:53:40.187563] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:17.022 [2024-04-18 09:53:40.187575] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:17.022 [2024-04-18 
09:53:40.187584] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:17.022 [2024-04-18 09:53:40.187592] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x62600001b520) on tqpair=0x614000002040 00:25:17.022 [2024-04-18 09:53:40.187601] nvme_ctrlr.c:1082:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] RTD3E = 0 us 00:25:17.022 [2024-04-18 09:53:40.187611] nvme_ctrlr.c:1085:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] shutdown timeout = 10000 ms 00:25:17.022 [2024-04-18 09:53:40.187628] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:17.022 [2024-04-18 09:53:40.187636] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:17.022 [2024-04-18 09:53:40.187644] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x614000002040) 00:25:17.022 [2024-04-18 09:53:40.187662] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.022 [2024-04-18 09:53:40.187689] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b520, cid 3, qid 0 00:25:17.022 [2024-04-18 09:53:40.187764] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:17.022 [2024-04-18 09:53:40.187782] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:17.022 [2024-04-18 09:53:40.187789] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:17.022 [2024-04-18 09:53:40.187800] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x62600001b520) on tqpair=0x614000002040 00:25:17.022 [2024-04-18 09:53:40.187821] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:17.022 [2024-04-18 09:53:40.187829] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:17.022 [2024-04-18 09:53:40.187836] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x614000002040) 00:25:17.022 [2024-04-18 09:53:40.187849] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.022 [2024-04-18 09:53:40.187875] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b520, cid 3, qid 0 00:25:17.022 [2024-04-18 09:53:40.187950] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:17.022 [2024-04-18 09:53:40.187976] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:17.022 [2024-04-18 09:53:40.187984] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:17.022 [2024-04-18 09:53:40.187991] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x62600001b520) on tqpair=0x614000002040 00:25:17.022 [2024-04-18 09:53:40.188009] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:17.022 [2024-04-18 09:53:40.188021] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:17.022 [2024-04-18 09:53:40.188029] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x614000002040) 00:25:17.022 [2024-04-18 09:53:40.188042] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.022 [2024-04-18 09:53:40.188068] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b520, cid 3, qid 0 00:25:17.022 [2024-04-18 09:53:40.188175] 
nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:17.022 [2024-04-18 09:53:40.188190] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:17.022 [2024-04-18 09:53:40.188196] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:17.022 [2024-04-18 09:53:40.188203] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x62600001b520) on tqpair=0x614000002040 00:25:17.022 [2024-04-18 09:53:40.188221] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:17.022 [2024-04-18 09:53:40.188230] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:17.022 [2024-04-18 09:53:40.188236] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x614000002040) 00:25:17.022 [2024-04-18 09:53:40.188253] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.022 [2024-04-18 09:53:40.188281] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b520, cid 3, qid 0 00:25:17.022 [2024-04-18 09:53:40.188353] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:17.022 [2024-04-18 09:53:40.188364] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:17.022 [2024-04-18 09:53:40.188371] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:17.022 [2024-04-18 09:53:40.188378] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x62600001b520) on tqpair=0x614000002040 00:25:17.022 [2024-04-18 09:53:40.188395] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:17.022 [2024-04-18 09:53:40.188403] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:17.022 [2024-04-18 09:53:40.188409] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x614000002040) 00:25:17.022 [2024-04-18 09:53:40.188422] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.022 [2024-04-18 09:53:40.188446] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b520, cid 3, qid 0 00:25:17.022 [2024-04-18 09:53:40.188542] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:17.022 [2024-04-18 09:53:40.188566] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:17.022 [2024-04-18 09:53:40.188574] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:17.022 [2024-04-18 09:53:40.188581] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x62600001b520) on tqpair=0x614000002040 00:25:17.022 [2024-04-18 09:53:40.188599] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:17.022 [2024-04-18 09:53:40.188607] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:17.022 [2024-04-18 09:53:40.188614] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x614000002040) 00:25:17.022 [2024-04-18 09:53:40.188626] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.022 [2024-04-18 09:53:40.188652] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b520, cid 3, qid 0 00:25:17.022 [2024-04-18 09:53:40.188724] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:17.022 [2024-04-18 09:53:40.188735] 
nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:17.022 [2024-04-18 09:53:40.188741] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:17.022 [2024-04-18 09:53:40.188748] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x62600001b520) on tqpair=0x614000002040 00:25:17.022 [2024-04-18 09:53:40.188779] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:17.022 [2024-04-18 09:53:40.188787] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:17.023 [2024-04-18 09:53:40.188794] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x614000002040) 00:25:17.023 [2024-04-18 09:53:40.188807] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.023 [2024-04-18 09:53:40.188830] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b520, cid 3, qid 0 00:25:17.023 [2024-04-18 09:53:40.188929] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:17.023 [2024-04-18 09:53:40.188946] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:17.023 [2024-04-18 09:53:40.188953] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:17.023 [2024-04-18 09:53:40.188964] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x62600001b520) on tqpair=0x614000002040 00:25:17.023 [2024-04-18 09:53:40.188982] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:17.023 [2024-04-18 09:53:40.188990] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:17.023 [2024-04-18 09:53:40.188997] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x614000002040) 00:25:17.023 [2024-04-18 09:53:40.189009] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.023 [2024-04-18 09:53:40.189034] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b520, cid 3, qid 0 00:25:17.023 [2024-04-18 09:53:40.189119] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:17.023 [2024-04-18 09:53:40.189148] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:17.023 [2024-04-18 09:53:40.189155] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:17.023 [2024-04-18 09:53:40.189166] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x62600001b520) on tqpair=0x614000002040 00:25:17.023 [2024-04-18 09:53:40.189186] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:17.023 [2024-04-18 09:53:40.189194] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:17.023 [2024-04-18 09:53:40.189201] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x614000002040) 00:25:17.023 [2024-04-18 09:53:40.189217] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.023 [2024-04-18 09:53:40.189245] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b520, cid 3, qid 0 00:25:17.023 [2024-04-18 09:53:40.189333] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:17.023 [2024-04-18 09:53:40.189344] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:17.023 [2024-04-18 09:53:40.189351] 
nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:17.023 [2024-04-18 09:53:40.189358] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x62600001b520) on tqpair=0x614000002040 00:25:17.023 [2024-04-18 09:53:40.189379] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:17.023 [2024-04-18 09:53:40.189388] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:17.023 [2024-04-18 09:53:40.189394] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x614000002040) 00:25:17.023 [2024-04-18 09:53:40.189407] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.023 [2024-04-18 09:53:40.189432] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b520, cid 3, qid 0 00:25:17.023 [2024-04-18 09:53:40.189523] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:17.023 [2024-04-18 09:53:40.189540] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:17.023 [2024-04-18 09:53:40.189547] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:17.023 [2024-04-18 09:53:40.189554] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x62600001b520) on tqpair=0x614000002040 00:25:17.023 [2024-04-18 09:53:40.189592] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:17.023 [2024-04-18 09:53:40.189601] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:17.023 [2024-04-18 09:53:40.189608] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x614000002040) 00:25:17.023 [2024-04-18 09:53:40.189621] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.023 [2024-04-18 09:53:40.189657] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b520, cid 3, qid 0 00:25:17.023 [2024-04-18 09:53:40.189729] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:17.023 [2024-04-18 09:53:40.189740] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:17.023 [2024-04-18 09:53:40.189746] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:17.023 [2024-04-18 09:53:40.189753] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x62600001b520) on tqpair=0x614000002040 00:25:17.023 [2024-04-18 09:53:40.189771] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:17.023 [2024-04-18 09:53:40.189779] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:17.023 [2024-04-18 09:53:40.189785] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x614000002040) 00:25:17.023 [2024-04-18 09:53:40.189802] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.023 [2024-04-18 09:53:40.189827] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b520, cid 3, qid 0 00:25:17.023 [2024-04-18 09:53:40.189893] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:17.023 [2024-04-18 09:53:40.189905] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:17.023 [2024-04-18 09:53:40.189911] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:17.023 [2024-04-18 09:53:40.189918] nvme_tcp.c: 
908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x62600001b520) on tqpair=0x614000002040 00:25:17.023 [2024-04-18 09:53:40.189935] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:17.023 [2024-04-18 09:53:40.189943] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:17.023 [2024-04-18 09:53:40.189950] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x614000002040) 00:25:17.023 [2024-04-18 09:53:40.189967] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.023 [2024-04-18 09:53:40.189991] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b520, cid 3, qid 0 00:25:17.023 [2024-04-18 09:53:40.190069] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:17.023 [2024-04-18 09:53:40.190089] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:17.023 [2024-04-18 09:53:40.190097] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:17.023 [2024-04-18 09:53:40.190104] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x62600001b520) on tqpair=0x614000002040 00:25:17.023 [2024-04-18 09:53:40.190121] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:17.023 [2024-04-18 09:53:40.194164] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:17.023 [2024-04-18 09:53:40.194173] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x614000002040) 00:25:17.023 [2024-04-18 09:53:40.194189] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.023 [2024-04-18 09:53:40.194223] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b520, cid 3, qid 0 00:25:17.023 [2024-04-18 09:53:40.194302] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:17.023 [2024-04-18 09:53:40.194314] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:17.023 [2024-04-18 09:53:40.194321] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:17.023 [2024-04-18 09:53:40.194328] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x62600001b520) on tqpair=0x614000002040 00:25:17.023 [2024-04-18 09:53:40.194347] nvme_ctrlr.c:1204:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] shutdown complete in 6 milliseconds 00:25:17.023 00:25:17.023 09:53:40 -- host/identify.sh@45 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -L all 00:25:17.023 [2024-04-18 09:53:40.302166] Starting SPDK v24.05-pre git sha1 65b4e17c6 / DPDK 23.11.0 initialization... 
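[editor's note] At this point the discovery pass is finished (the nqn.2014-08.org.nvmexpress.discovery controller reports "shutdown complete in 6 milliseconds") and host/identify.sh re-runs spdk_nvme_identify, now directly against nqn.2016-06.io.spdk:cnode1 over TCP at 10.0.0.2:4420. A minimal sketch of what that command boils down to through SPDK's public host API follows; it is illustrative only, not the spdk_nvme_identify tool, and it assumes the spdk_env_opts_init/spdk_env_init startup pattern used in the SPDK examples plus the mn/sn/fr fields of struct spdk_nvme_ctrlr_data.

/* identify_sketch.c - illustrative only, not the spdk_nvme_identify tool.
 * Connects to the same target the test drives above and prints a few
 * Identify Controller fields. */
#include "spdk/stdinc.h"
#include "spdk/env.h"
#include "spdk/nvme.h"

int
main(void)
{
	struct spdk_env_opts env_opts;
	struct spdk_nvme_transport_id trid;
	struct spdk_nvme_ctrlr *ctrlr;
	const struct spdk_nvme_ctrlr_data *cdata;

	spdk_env_opts_init(&env_opts);
	env_opts.name = "identify_sketch";
	if (spdk_env_init(&env_opts) < 0) {
		return 1;
	}

	/* Same -r string as the command line in the log above. */
	memset(&trid, 0, sizeof(trid));
	if (spdk_nvme_transport_id_parse(&trid,
			"trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 "
			"subnqn:nqn.2016-06.io.spdk:cnode1") != 0) {
		fprintf(stderr, "failed to parse transport ID\n");
		return 1;
	}

	ctrlr = spdk_nvme_connect(&trid, NULL, 0);
	if (ctrlr == NULL) {
		fprintf(stderr, "spdk_nvme_connect() failed\n");
		return 1;
	}

	/* mn/sn/fr are fixed-width, space-padded fields, not NUL-terminated. */
	cdata = spdk_nvme_ctrlr_get_data(ctrlr);
	printf("Model Number:     %.*s\n", (int)sizeof(cdata->mn), (const char *)cdata->mn);
	printf("Serial Number:    %.*s\n", (int)sizeof(cdata->sn), (const char *)cdata->sn);
	printf("Firmware Version: %.*s\n", (int)sizeof(cdata->fr), (const char *)cdata->fr);

	spdk_nvme_detach(ctrlr);
	return 0;
}

Built against an SPDK tree, this should report the same "SPDK bdev Controller" / "SPDK00000000000001" / "24.05" strings that appear in the controller report further down in this log.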
00:25:17.023 [2024-04-18 09:53:40.302280] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74889 ] 00:25:17.285 [2024-04-18 09:53:40.469670] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to connect adminq (no timeout) 00:25:17.285 [2024-04-18 09:53:40.469829] nvme_tcp.c:2326:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:25:17.285 [2024-04-18 09:53:40.469852] nvme_tcp.c:2330:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:25:17.285 [2024-04-18 09:53:40.469883] nvme_tcp.c:2348:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:25:17.285 [2024-04-18 09:53:40.469922] sock.c: 336:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl uring 00:25:17.285 [2024-04-18 09:53:40.470111] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for connect adminq (no timeout) 00:25:17.285 [2024-04-18 09:53:40.470213] nvme_tcp.c:1543:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0x614000002040 0 00:25:17.285 [2024-04-18 09:53:40.477168] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:25:17.285 [2024-04-18 09:53:40.477208] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:25:17.285 [2024-04-18 09:53:40.477219] nvme_tcp.c:1589:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:25:17.285 [2024-04-18 09:53:40.477226] nvme_tcp.c:1590:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:25:17.285 [2024-04-18 09:53:40.477337] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:17.285 [2024-04-18 09:53:40.477356] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:17.285 [2024-04-18 09:53:40.477365] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x614000002040) 00:25:17.285 [2024-04-18 09:53:40.477398] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:25:17.285 [2024-04-18 09:53:40.477458] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b100, cid 0, qid 0 00:25:17.285 [2024-04-18 09:53:40.485160] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:17.285 [2024-04-18 09:53:40.485199] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:17.285 [2024-04-18 09:53:40.485209] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:17.285 [2024-04-18 09:53:40.485219] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x62600001b100) on tqpair=0x614000002040 00:25:17.285 [2024-04-18 09:53:40.485248] nvme_fabric.c: 622:_nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:25:17.285 [2024-04-18 09:53:40.485269] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read vs (no timeout) 00:25:17.285 [2024-04-18 09:53:40.485283] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read vs wait for vs (no timeout) 00:25:17.285 [2024-04-18 09:53:40.485314] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:17.285 [2024-04-18 09:53:40.485324] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:17.285 [2024-04-18 09:53:40.485344] 
nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x614000002040) 00:25:17.285 [2024-04-18 09:53:40.485372] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.285 [2024-04-18 09:53:40.485420] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b100, cid 0, qid 0 00:25:17.285 [2024-04-18 09:53:40.485526] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:17.285 [2024-04-18 09:53:40.485545] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:17.285 [2024-04-18 09:53:40.485553] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:17.285 [2024-04-18 09:53:40.485561] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x62600001b100) on tqpair=0x614000002040 00:25:17.285 [2024-04-18 09:53:40.485573] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read cap (no timeout) 00:25:17.285 [2024-04-18 09:53:40.485591] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read cap wait for cap (no timeout) 00:25:17.285 [2024-04-18 09:53:40.485605] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:17.285 [2024-04-18 09:53:40.485613] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:17.285 [2024-04-18 09:53:40.485621] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x614000002040) 00:25:17.285 [2024-04-18 09:53:40.485642] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.285 [2024-04-18 09:53:40.485675] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b100, cid 0, qid 0 00:25:17.285 [2024-04-18 09:53:40.485760] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:17.285 [2024-04-18 09:53:40.485773] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:17.285 [2024-04-18 09:53:40.485779] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:17.285 [2024-04-18 09:53:40.485787] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x62600001b100) on tqpair=0x614000002040 00:25:17.285 [2024-04-18 09:53:40.485797] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to check en (no timeout) 00:25:17.285 [2024-04-18 09:53:40.485824] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to check en wait for cc (timeout 15000 ms) 00:25:17.285 [2024-04-18 09:53:40.485848] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:17.285 [2024-04-18 09:53:40.485856] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:17.285 [2024-04-18 09:53:40.485868] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x614000002040) 00:25:17.285 [2024-04-18 09:53:40.485883] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.285 [2024-04-18 09:53:40.485921] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b100, cid 0, qid 0 00:25:17.285 [2024-04-18 09:53:40.485998] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:17.285 [2024-04-18 09:53:40.486011] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: 
*DEBUG*: enter: pdu type =5 00:25:17.285 [2024-04-18 09:53:40.486017] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:17.285 [2024-04-18 09:53:40.486025] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x62600001b100) on tqpair=0x614000002040 00:25:17.285 [2024-04-18 09:53:40.486036] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:25:17.285 [2024-04-18 09:53:40.486054] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:17.286 [2024-04-18 09:53:40.486063] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:17.286 [2024-04-18 09:53:40.486070] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x614000002040) 00:25:17.286 [2024-04-18 09:53:40.486085] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.286 [2024-04-18 09:53:40.486115] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b100, cid 0, qid 0 00:25:17.286 [2024-04-18 09:53:40.486189] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:17.286 [2024-04-18 09:53:40.486203] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:17.286 [2024-04-18 09:53:40.486210] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:17.286 [2024-04-18 09:53:40.486217] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x62600001b100) on tqpair=0x614000002040 00:25:17.286 [2024-04-18 09:53:40.486230] nvme_ctrlr.c:3749:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CC.EN = 0 && CSTS.RDY = 0 00:25:17.286 [2024-04-18 09:53:40.486241] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to controller is disabled (timeout 15000 ms) 00:25:17.286 [2024-04-18 09:53:40.486261] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:25:17.286 [2024-04-18 09:53:40.486382] nvme_ctrlr.c:3942:nvme_ctrlr_process_init: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Setting CC.EN = 1 00:25:17.286 [2024-04-18 09:53:40.486391] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:25:17.286 [2024-04-18 09:53:40.486406] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:17.286 [2024-04-18 09:53:40.486415] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:17.286 [2024-04-18 09:53:40.486422] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x614000002040) 00:25:17.286 [2024-04-18 09:53:40.486438] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.286 [2024-04-18 09:53:40.486484] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b100, cid 0, qid 0 00:25:17.286 [2024-04-18 09:53:40.486586] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:17.286 [2024-04-18 09:53:40.486599] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:17.286 [2024-04-18 09:53:40.486608] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:17.286 [2024-04-18 09:53:40.486616] nvme_tcp.c: 
908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x62600001b100) on tqpair=0x614000002040 00:25:17.286 [2024-04-18 09:53:40.486627] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:25:17.286 [2024-04-18 09:53:40.486649] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:17.286 [2024-04-18 09:53:40.486658] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:17.286 [2024-04-18 09:53:40.486666] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x614000002040) 00:25:17.286 [2024-04-18 09:53:40.486681] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.286 [2024-04-18 09:53:40.486711] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b100, cid 0, qid 0 00:25:17.286 [2024-04-18 09:53:40.486799] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:17.286 [2024-04-18 09:53:40.486811] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:17.286 [2024-04-18 09:53:40.486817] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:17.286 [2024-04-18 09:53:40.486830] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x62600001b100) on tqpair=0x614000002040 00:25:17.286 [2024-04-18 09:53:40.486847] nvme_ctrlr.c:3784:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:25:17.286 [2024-04-18 09:53:40.486856] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to reset admin queue (timeout 30000 ms) 00:25:17.286 [2024-04-18 09:53:40.486899] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify controller (no timeout) 00:25:17.286 [2024-04-18 09:53:40.486923] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify controller (timeout 30000 ms) 00:25:17.286 [2024-04-18 09:53:40.486946] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:17.286 [2024-04-18 09:53:40.486955] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x614000002040) 00:25:17.286 [2024-04-18 09:53:40.486973] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.286 [2024-04-18 09:53:40.487007] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b100, cid 0, qid 0 00:25:17.286 [2024-04-18 09:53:40.487178] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:25:17.286 [2024-04-18 09:53:40.487193] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:25:17.286 [2024-04-18 09:53:40.487199] nvme_tcp.c:1707:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:25:17.286 [2024-04-18 09:53:40.487208] nvme_tcp.c:1708:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x614000002040): datao=0, datal=4096, cccid=0 00:25:17.286 [2024-04-18 09:53:40.487220] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x62600001b100) on tqpair(0x614000002040): expected_datao=0, payload_size=4096 00:25:17.286 [2024-04-18 09:53:40.487229] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:17.286 [2024-04-18 09:53:40.487245] 
nvme_tcp.c:1509:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:25:17.286 [2024-04-18 09:53:40.487253] nvme_tcp.c:1293:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:25:17.286 [2024-04-18 09:53:40.487268] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:17.286 [2024-04-18 09:53:40.487283] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:17.286 [2024-04-18 09:53:40.487291] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:17.286 [2024-04-18 09:53:40.487298] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x62600001b100) on tqpair=0x614000002040 00:25:17.286 [2024-04-18 09:53:40.487318] nvme_ctrlr.c:1984:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] transport max_xfer_size 4294967295 00:25:17.286 [2024-04-18 09:53:40.487328] nvme_ctrlr.c:1988:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] MDTS max_xfer_size 131072 00:25:17.286 [2024-04-18 09:53:40.487344] nvme_ctrlr.c:1991:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CNTLID 0x0001 00:25:17.286 [2024-04-18 09:53:40.487360] nvme_ctrlr.c:2015:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] transport max_sges 16 00:25:17.286 [2024-04-18 09:53:40.487369] nvme_ctrlr.c:2030:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] fuses compare and write: 1 00:25:17.286 [2024-04-18 09:53:40.487378] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to configure AER (timeout 30000 ms) 00:25:17.286 [2024-04-18 09:53:40.487393] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for configure aer (timeout 30000 ms) 00:25:17.286 [2024-04-18 09:53:40.487409] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:17.286 [2024-04-18 09:53:40.487428] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:17.286 [2024-04-18 09:53:40.487435] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x614000002040) 00:25:17.286 [2024-04-18 09:53:40.487452] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:25:17.286 [2024-04-18 09:53:40.487488] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b100, cid 0, qid 0 00:25:17.286 [2024-04-18 09:53:40.487604] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:17.286 [2024-04-18 09:53:40.487617] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:17.286 [2024-04-18 09:53:40.487627] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:17.286 [2024-04-18 09:53:40.487634] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x62600001b100) on tqpair=0x614000002040 00:25:17.286 [2024-04-18 09:53:40.487648] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:17.286 [2024-04-18 09:53:40.487657] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:17.286 [2024-04-18 09:53:40.487668] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x614000002040) 00:25:17.286 [2024-04-18 09:53:40.487686] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:25:17.286 [2024-04-18 09:53:40.487699] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:17.286 
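[editor's note] The DEBUG lines around this point show the initiator parsing the Identify Controller data (MDTS caps transfers at 131072 bytes, CNTLID 0x0001, max_sges 16, fused compare & write supported), queuing four ASYNC EVENT REQUESTs, and then negotiating the keep-alive timer ("Sending keep alive every 5000000 us" a little further down, which is half of the default 10000 ms timeout). From application code those two steps are driven by the controller options and the AER callback registration; the fragment below is a hedged sketch of that, with connect_with_keep_alive, my_aer_cb, and the 10 s value purely illustrative.

/* Illustrative fragment (not part of this test): choose a keep-alive timeout
 * before connecting and register a callback for Asynchronous Event Request
 * completions. */
#include "spdk/stdinc.h"
#include "spdk/nvme.h"

static void
my_aer_cb(void *arg, const struct spdk_nvme_cpl *cpl)
{
	/* Invoked from spdk_nvme_ctrlr_process_admin_completions() whenever the
	 * target completes one of the outstanding ASYNC EVENT REQUESTs. */
	printf("AER completion, cdw0=0x%08x\n", cpl->cdw0);
}

static struct spdk_nvme_ctrlr *
connect_with_keep_alive(const struct spdk_nvme_transport_id *trid)
{
	struct spdk_nvme_ctrlr_opts opts;
	struct spdk_nvme_ctrlr *ctrlr;

	spdk_nvme_ctrlr_get_default_ctrlr_opts(&opts, sizeof(opts));
	/* The driver sends keep-alives at roughly half this interval; the log
	 * above shows 5 s for the default 10 s timeout. */
	opts.keep_alive_timeout_ms = 10000;

	ctrlr = spdk_nvme_connect(trid, &opts, sizeof(opts));
	if (ctrlr != NULL) {
		spdk_nvme_ctrlr_register_aer_callback(ctrlr, my_aer_cb, NULL);
	}
	return ctrlr;
}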
[2024-04-18 09:53:40.487706] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:17.286 [2024-04-18 09:53:40.487717] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0x614000002040) 00:25:17.286 [2024-04-18 09:53:40.487729] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:25:17.286 [2024-04-18 09:53:40.487739] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:17.286 [2024-04-18 09:53:40.487745] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:17.286 [2024-04-18 09:53:40.487755] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0x614000002040) 00:25:17.286 [2024-04-18 09:53:40.487766] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:25:17.286 [2024-04-18 09:53:40.487776] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:17.286 [2024-04-18 09:53:40.487783] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:17.286 [2024-04-18 09:53:40.487789] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x614000002040) 00:25:17.286 [2024-04-18 09:53:40.487799] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:25:17.286 [2024-04-18 09:53:40.487809] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set keep alive timeout (timeout 30000 ms) 00:25:17.286 [2024-04-18 09:53:40.487827] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:25:17.286 [2024-04-18 09:53:40.487840] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:17.286 [2024-04-18 09:53:40.487848] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x614000002040) 00:25:17.286 [2024-04-18 09:53:40.487861] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.287 [2024-04-18 09:53:40.487916] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b100, cid 0, qid 0 00:25:17.287 [2024-04-18 09:53:40.487938] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b260, cid 1, qid 0 00:25:17.287 [2024-04-18 09:53:40.487947] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b3c0, cid 2, qid 0 00:25:17.287 [2024-04-18 09:53:40.487955] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b520, cid 3, qid 0 00:25:17.287 [2024-04-18 09:53:40.487963] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b680, cid 4, qid 0 00:25:17.287 [2024-04-18 09:53:40.488095] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:17.287 [2024-04-18 09:53:40.488107] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:17.287 [2024-04-18 09:53:40.488113] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:17.287 [2024-04-18 09:53:40.488120] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x62600001b680) on tqpair=0x614000002040 00:25:17.287 [2024-04-18 09:53:40.488156] nvme_ctrlr.c:2902:nvme_ctrlr_set_keep_alive_timeout_done: 
*DEBUG*: [nqn.2016-06.io.spdk:cnode1] Sending keep alive every 5000000 us 00:25:17.287 [2024-04-18 09:53:40.488168] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify controller iocs specific (timeout 30000 ms) 00:25:17.287 [2024-04-18 09:53:40.488187] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set number of queues (timeout 30000 ms) 00:25:17.287 [2024-04-18 09:53:40.488199] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for set number of queues (timeout 30000 ms) 00:25:17.287 [2024-04-18 09:53:40.488211] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:17.287 [2024-04-18 09:53:40.488219] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:17.287 [2024-04-18 09:53:40.488227] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x614000002040) 00:25:17.287 [2024-04-18 09:53:40.488246] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:4 cdw10:00000007 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:25:17.287 [2024-04-18 09:53:40.488279] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b680, cid 4, qid 0 00:25:17.287 [2024-04-18 09:53:40.488358] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:17.287 [2024-04-18 09:53:40.488371] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:17.287 [2024-04-18 09:53:40.488378] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:17.287 [2024-04-18 09:53:40.488384] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x62600001b680) on tqpair=0x614000002040 00:25:17.287 [2024-04-18 09:53:40.488471] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify active ns (timeout 30000 ms) 00:25:17.287 [2024-04-18 09:53:40.488494] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify active ns (timeout 30000 ms) 00:25:17.287 [2024-04-18 09:53:40.488512] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:17.287 [2024-04-18 09:53:40.488533] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x614000002040) 00:25:17.287 [2024-04-18 09:53:40.488551] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.287 [2024-04-18 09:53:40.488583] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b680, cid 4, qid 0 00:25:17.287 [2024-04-18 09:53:40.488686] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:25:17.287 [2024-04-18 09:53:40.488698] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:25:17.287 [2024-04-18 09:53:40.488705] nvme_tcp.c:1707:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:25:17.287 [2024-04-18 09:53:40.488712] nvme_tcp.c:1708:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x614000002040): datao=0, datal=4096, cccid=4 00:25:17.287 [2024-04-18 09:53:40.488720] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x62600001b680) on tqpair(0x614000002040): expected_datao=0, payload_size=4096 00:25:17.287 [2024-04-18 09:53:40.488727] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:17.287 [2024-04-18 09:53:40.488739] 
nvme_tcp.c:1509:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:25:17.287 [2024-04-18 09:53:40.488746] nvme_tcp.c:1293:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:25:17.287 [2024-04-18 09:53:40.488763] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:17.287 [2024-04-18 09:53:40.488773] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:17.287 [2024-04-18 09:53:40.488780] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:17.287 [2024-04-18 09:53:40.488787] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x62600001b680) on tqpair=0x614000002040 00:25:17.287 [2024-04-18 09:53:40.488823] nvme_ctrlr.c:4557:spdk_nvme_ctrlr_get_ns: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Namespace 1 was added 00:25:17.287 [2024-04-18 09:53:40.488850] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify ns (timeout 30000 ms) 00:25:17.287 [2024-04-18 09:53:40.488889] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify ns (timeout 30000 ms) 00:25:17.287 [2024-04-18 09:53:40.488917] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:17.287 [2024-04-18 09:53:40.488926] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x614000002040) 00:25:17.287 [2024-04-18 09:53:40.488944] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.287 [2024-04-18 09:53:40.488976] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b680, cid 4, qid 0 00:25:17.287 [2024-04-18 09:53:40.489107] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:25:17.287 [2024-04-18 09:53:40.489120] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:25:17.287 [2024-04-18 09:53:40.493156] nvme_tcp.c:1707:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:25:17.287 [2024-04-18 09:53:40.493187] nvme_tcp.c:1708:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x614000002040): datao=0, datal=4096, cccid=4 00:25:17.287 [2024-04-18 09:53:40.493197] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x62600001b680) on tqpair(0x614000002040): expected_datao=0, payload_size=4096 00:25:17.287 [2024-04-18 09:53:40.493205] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:17.287 [2024-04-18 09:53:40.493228] nvme_tcp.c:1509:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:25:17.287 [2024-04-18 09:53:40.493237] nvme_tcp.c:1293:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:25:17.287 [2024-04-18 09:53:40.493253] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:17.287 [2024-04-18 09:53:40.493264] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:17.287 [2024-04-18 09:53:40.493271] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:17.287 [2024-04-18 09:53:40.493278] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x62600001b680) on tqpair=0x614000002040 00:25:17.287 [2024-04-18 09:53:40.493321] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify namespace id descriptors (timeout 30000 ms) 00:25:17.287 [2024-04-18 09:53:40.493349] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify namespace id descriptors 
(timeout 30000 ms) 00:25:17.287 [2024-04-18 09:53:40.493373] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:17.287 [2024-04-18 09:53:40.493382] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x614000002040) 00:25:17.287 [2024-04-18 09:53:40.493401] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.287 [2024-04-18 09:53:40.493440] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b680, cid 4, qid 0 00:25:17.287 [2024-04-18 09:53:40.493579] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:25:17.287 [2024-04-18 09:53:40.493592] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:25:17.287 [2024-04-18 09:53:40.493598] nvme_tcp.c:1707:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:25:17.287 [2024-04-18 09:53:40.493605] nvme_tcp.c:1708:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x614000002040): datao=0, datal=4096, cccid=4 00:25:17.287 [2024-04-18 09:53:40.493613] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x62600001b680) on tqpair(0x614000002040): expected_datao=0, payload_size=4096 00:25:17.287 [2024-04-18 09:53:40.493620] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:17.287 [2024-04-18 09:53:40.493635] nvme_tcp.c:1509:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:25:17.287 [2024-04-18 09:53:40.493643] nvme_tcp.c:1293:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:25:17.287 [2024-04-18 09:53:40.493668] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:17.287 [2024-04-18 09:53:40.493679] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:17.287 [2024-04-18 09:53:40.493685] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:17.287 [2024-04-18 09:53:40.493693] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x62600001b680) on tqpair=0x614000002040 00:25:17.287 [2024-04-18 09:53:40.493730] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify ns iocs specific (timeout 30000 ms) 00:25:17.287 [2024-04-18 09:53:40.493745] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set supported log pages (timeout 30000 ms) 00:25:17.287 [2024-04-18 09:53:40.493763] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set supported features (timeout 30000 ms) 00:25:17.287 [2024-04-18 09:53:40.493775] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set doorbell buffer config (timeout 30000 ms) 00:25:17.287 [2024-04-18 09:53:40.493787] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set host ID (timeout 30000 ms) 00:25:17.287 [2024-04-18 09:53:40.493796] nvme_ctrlr.c:2990:nvme_ctrlr_set_host_id: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] NVMe-oF transport - not sending Set Features - Host ID 00:25:17.287 [2024-04-18 09:53:40.493805] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to transport ready (timeout 30000 ms) 00:25:17.287 [2024-04-18 09:53:40.493814] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to ready (no timeout) 00:25:17.287 [2024-04-18 09:53:40.493857] nvme_tcp.c: 
949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:17.287 [2024-04-18 09:53:40.493867] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x614000002040) 00:25:17.287 [2024-04-18 09:53:40.493893] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:4 cdw10:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.287 [2024-04-18 09:53:40.493917] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:17.287 [2024-04-18 09:53:40.493936] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:17.287 [2024-04-18 09:53:40.493943] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x614000002040) 00:25:17.288 [2024-04-18 09:53:40.493955] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:25:17.288 [2024-04-18 09:53:40.493991] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b680, cid 4, qid 0 00:25:17.288 [2024-04-18 09:53:40.494010] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b7e0, cid 5, qid 0 00:25:17.288 [2024-04-18 09:53:40.494093] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:17.288 [2024-04-18 09:53:40.494110] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:17.288 [2024-04-18 09:53:40.494117] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:17.288 [2024-04-18 09:53:40.494146] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x62600001b680) on tqpair=0x614000002040 00:25:17.288 [2024-04-18 09:53:40.494160] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:17.288 [2024-04-18 09:53:40.494170] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:17.288 [2024-04-18 09:53:40.494177] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:17.288 [2024-04-18 09:53:40.494183] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x62600001b7e0) on tqpair=0x614000002040 00:25:17.288 [2024-04-18 09:53:40.494206] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:17.288 [2024-04-18 09:53:40.494214] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x614000002040) 00:25:17.288 [2024-04-18 09:53:40.494228] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:5 cdw10:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.288 [2024-04-18 09:53:40.494260] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b7e0, cid 5, qid 0 00:25:17.288 [2024-04-18 09:53:40.494341] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:17.288 [2024-04-18 09:53:40.494353] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:17.288 [2024-04-18 09:53:40.494360] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:17.288 [2024-04-18 09:53:40.494367] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x62600001b7e0) on tqpair=0x614000002040 00:25:17.288 [2024-04-18 09:53:40.494384] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:17.288 [2024-04-18 09:53:40.494392] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x614000002040) 00:25:17.288 [2024-04-18 09:53:40.494409] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES 
TEMPERATURE THRESHOLD cid:5 cdw10:00000004 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.288 [2024-04-18 09:53:40.494438] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b7e0, cid 5, qid 0 00:25:17.288 [2024-04-18 09:53:40.494547] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:17.288 [2024-04-18 09:53:40.494560] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:17.288 [2024-04-18 09:53:40.494566] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:17.288 [2024-04-18 09:53:40.494573] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x62600001b7e0) on tqpair=0x614000002040 00:25:17.288 [2024-04-18 09:53:40.494590] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:17.288 [2024-04-18 09:53:40.494597] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x614000002040) 00:25:17.288 [2024-04-18 09:53:40.494614] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:5 cdw10:00000007 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.288 [2024-04-18 09:53:40.494643] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b7e0, cid 5, qid 0 00:25:17.288 [2024-04-18 09:53:40.494719] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:17.288 [2024-04-18 09:53:40.494736] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:17.288 [2024-04-18 09:53:40.494743] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:17.288 [2024-04-18 09:53:40.494750] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x62600001b7e0) on tqpair=0x614000002040 00:25:17.288 [2024-04-18 09:53:40.494783] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:17.288 [2024-04-18 09:53:40.494793] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x614000002040) 00:25:17.288 [2024-04-18 09:53:40.494808] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.288 [2024-04-18 09:53:40.494822] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:17.288 [2024-04-18 09:53:40.494841] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x614000002040) 00:25:17.288 [2024-04-18 09:53:40.494863] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:ffffffff cdw10:007f0002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.288 [2024-04-18 09:53:40.494880] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:17.288 [2024-04-18 09:53:40.494900] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=6 on tqpair(0x614000002040) 00:25:17.288 [2024-04-18 09:53:40.494912] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:ffffffff cdw10:007f0003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.288 [2024-04-18 09:53:40.494929] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:17.288 [2024-04-18 09:53:40.494941] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0x614000002040) 00:25:17.288 [2024-04-18 09:53:40.494952] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:ffffffff 
cdw10:03ff0005 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.288 [2024-04-18 09:53:40.494984] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b7e0, cid 5, qid 0 00:25:17.288 [2024-04-18 09:53:40.494997] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b680, cid 4, qid 0 00:25:17.288 [2024-04-18 09:53:40.495005] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b940, cid 6, qid 0 00:25:17.288 [2024-04-18 09:53:40.495013] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001baa0, cid 7, qid 0 00:25:17.288 [2024-04-18 09:53:40.495217] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:25:17.288 [2024-04-18 09:53:40.495233] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:25:17.288 [2024-04-18 09:53:40.495241] nvme_tcp.c:1707:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:25:17.288 [2024-04-18 09:53:40.495252] nvme_tcp.c:1708:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x614000002040): datao=0, datal=8192, cccid=5 00:25:17.288 [2024-04-18 09:53:40.495262] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x62600001b7e0) on tqpair(0x614000002040): expected_datao=0, payload_size=8192 00:25:17.288 [2024-04-18 09:53:40.495271] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:17.288 [2024-04-18 09:53:40.495299] nvme_tcp.c:1509:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:25:17.288 [2024-04-18 09:53:40.495309] nvme_tcp.c:1293:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:25:17.288 [2024-04-18 09:53:40.495319] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:25:17.288 [2024-04-18 09:53:40.495328] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:25:17.288 [2024-04-18 09:53:40.495334] nvme_tcp.c:1707:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:25:17.288 [2024-04-18 09:53:40.495340] nvme_tcp.c:1708:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x614000002040): datao=0, datal=512, cccid=4 00:25:17.288 [2024-04-18 09:53:40.495354] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x62600001b680) on tqpair(0x614000002040): expected_datao=0, payload_size=512 00:25:17.288 [2024-04-18 09:53:40.495361] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:17.288 [2024-04-18 09:53:40.495377] nvme_tcp.c:1509:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:25:17.288 [2024-04-18 09:53:40.495384] nvme_tcp.c:1293:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:25:17.288 [2024-04-18 09:53:40.495393] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:25:17.288 [2024-04-18 09:53:40.495403] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:25:17.288 [2024-04-18 09:53:40.495409] nvme_tcp.c:1707:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:25:17.288 [2024-04-18 09:53:40.495415] nvme_tcp.c:1708:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x614000002040): datao=0, datal=512, cccid=6 00:25:17.288 [2024-04-18 09:53:40.495423] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x62600001b940) on tqpair(0x614000002040): expected_datao=0, payload_size=512 00:25:17.288 [2024-04-18 09:53:40.495430] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:17.288 [2024-04-18 09:53:40.495457] nvme_tcp.c:1509:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:25:17.288 [2024-04-18 09:53:40.495464] nvme_tcp.c:1293:nvme_tcp_c2h_data_payload_handle: 
*DEBUG*: enter 00:25:17.288 [2024-04-18 09:53:40.495473] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:25:17.288 [2024-04-18 09:53:40.495481] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:25:17.288 [2024-04-18 09:53:40.495487] nvme_tcp.c:1707:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:25:17.288 [2024-04-18 09:53:40.495493] nvme_tcp.c:1708:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x614000002040): datao=0, datal=4096, cccid=7 00:25:17.288 [2024-04-18 09:53:40.495500] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x62600001baa0) on tqpair(0x614000002040): expected_datao=0, payload_size=4096 00:25:17.288 [2024-04-18 09:53:40.495507] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:17.288 [2024-04-18 09:53:40.495521] nvme_tcp.c:1509:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:25:17.288 [2024-04-18 09:53:40.495527] nvme_tcp.c:1293:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:25:17.288 [2024-04-18 09:53:40.495548] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:17.288 [2024-04-18 09:53:40.495560] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:17.288 [2024-04-18 09:53:40.495567] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:17.288 [2024-04-18 09:53:40.495577] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x62600001b7e0) on tqpair=0x614000002040 00:25:17.288 [2024-04-18 09:53:40.495606] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:17.288 [2024-04-18 09:53:40.495617] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:17.288 [2024-04-18 09:53:40.495623] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:17.288 [2024-04-18 09:53:40.495629] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x62600001b680) on tqpair=0x614000002040 00:25:17.288 [2024-04-18 09:53:40.495650] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:17.288 [2024-04-18 09:53:40.495660] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:17.288 [2024-04-18 09:53:40.495666] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:17.288 [2024-04-18 09:53:40.495672] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x62600001b940) on tqpair=0x614000002040 00:25:17.288 [2024-04-18 09:53:40.495692] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:17.288 [2024-04-18 09:53:40.495702] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:17.288 [2024-04-18 09:53:40.495709] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:17.289 [2024-04-18 09:53:40.495715] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x62600001baa0) on tqpair=0x614000002040 00:25:17.289 ===================================================== 00:25:17.289 NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:25:17.289 ===================================================== 00:25:17.289 Controller Capabilities/Features 00:25:17.289 ================================ 00:25:17.289 Vendor ID: 8086 00:25:17.289 Subsystem Vendor ID: 8086 00:25:17.289 Serial Number: SPDK00000000000001 00:25:17.289 Model Number: SPDK bdev Controller 00:25:17.289 Firmware Version: 24.05 00:25:17.289 Recommended Arb Burst: 6 00:25:17.289 IEEE OUI Identifier: e4 d2 5c 00:25:17.289 Multi-path I/O 00:25:17.289 May have multiple 
subsystem ports: Yes 00:25:17.289 May have multiple controllers: Yes 00:25:17.289 Associated with SR-IOV VF: No 00:25:17.289 Max Data Transfer Size: 131072 00:25:17.289 Max Number of Namespaces: 32 00:25:17.289 Max Number of I/O Queues: 127 00:25:17.289 NVMe Specification Version (VS): 1.3 00:25:17.289 NVMe Specification Version (Identify): 1.3 00:25:17.289 Maximum Queue Entries: 128 00:25:17.289 Contiguous Queues Required: Yes 00:25:17.289 Arbitration Mechanisms Supported 00:25:17.289 Weighted Round Robin: Not Supported 00:25:17.289 Vendor Specific: Not Supported 00:25:17.289 Reset Timeout: 15000 ms 00:25:17.289 Doorbell Stride: 4 bytes 00:25:17.289 NVM Subsystem Reset: Not Supported 00:25:17.289 Command Sets Supported 00:25:17.289 NVM Command Set: Supported 00:25:17.289 Boot Partition: Not Supported 00:25:17.289 Memory Page Size Minimum: 4096 bytes 00:25:17.289 Memory Page Size Maximum: 4096 bytes 00:25:17.289 Persistent Memory Region: Not Supported 00:25:17.289 Optional Asynchronous Events Supported 00:25:17.289 Namespace Attribute Notices: Supported 00:25:17.289 Firmware Activation Notices: Not Supported 00:25:17.289 ANA Change Notices: Not Supported 00:25:17.289 PLE Aggregate Log Change Notices: Not Supported 00:25:17.289 LBA Status Info Alert Notices: Not Supported 00:25:17.289 EGE Aggregate Log Change Notices: Not Supported 00:25:17.289 Normal NVM Subsystem Shutdown event: Not Supported 00:25:17.289 Zone Descriptor Change Notices: Not Supported 00:25:17.289 Discovery Log Change Notices: Not Supported 00:25:17.289 Controller Attributes 00:25:17.289 128-bit Host Identifier: Supported 00:25:17.289 Non-Operational Permissive Mode: Not Supported 00:25:17.289 NVM Sets: Not Supported 00:25:17.289 Read Recovery Levels: Not Supported 00:25:17.289 Endurance Groups: Not Supported 00:25:17.289 Predictable Latency Mode: Not Supported 00:25:17.289 Traffic Based Keep ALive: Not Supported 00:25:17.289 Namespace Granularity: Not Supported 00:25:17.289 SQ Associations: Not Supported 00:25:17.289 UUID List: Not Supported 00:25:17.289 Multi-Domain Subsystem: Not Supported 00:25:17.289 Fixed Capacity Management: Not Supported 00:25:17.289 Variable Capacity Management: Not Supported 00:25:17.289 Delete Endurance Group: Not Supported 00:25:17.289 Delete NVM Set: Not Supported 00:25:17.289 Extended LBA Formats Supported: Not Supported 00:25:17.289 Flexible Data Placement Supported: Not Supported 00:25:17.289 00:25:17.289 Controller Memory Buffer Support 00:25:17.289 ================================ 00:25:17.289 Supported: No 00:25:17.289 00:25:17.289 Persistent Memory Region Support 00:25:17.289 ================================ 00:25:17.289 Supported: No 00:25:17.289 00:25:17.289 Admin Command Set Attributes 00:25:17.289 ============================ 00:25:17.289 Security Send/Receive: Not Supported 00:25:17.289 Format NVM: Not Supported 00:25:17.289 Firmware Activate/Download: Not Supported 00:25:17.289 Namespace Management: Not Supported 00:25:17.289 Device Self-Test: Not Supported 00:25:17.289 Directives: Not Supported 00:25:17.289 NVMe-MI: Not Supported 00:25:17.289 Virtualization Management: Not Supported 00:25:17.289 Doorbell Buffer Config: Not Supported 00:25:17.289 Get LBA Status Capability: Not Supported 00:25:17.289 Command & Feature Lockdown Capability: Not Supported 00:25:17.289 Abort Command Limit: 4 00:25:17.289 Async Event Request Limit: 4 00:25:17.289 Number of Firmware Slots: N/A 00:25:17.289 Firmware Slot 1 Read-Only: N/A 00:25:17.289 Firmware Activation Without Reset: N/A 00:25:17.289 
Multiple Update Detection Support: N/A 00:25:17.289 Firmware Update Granularity: No Information Provided 00:25:17.289 Per-Namespace SMART Log: No 00:25:17.289 Asymmetric Namespace Access Log Page: Not Supported 00:25:17.289 Subsystem NQN: nqn.2016-06.io.spdk:cnode1 00:25:17.289 Command Effects Log Page: Supported 00:25:17.289 Get Log Page Extended Data: Supported 00:25:17.289 Telemetry Log Pages: Not Supported 00:25:17.289 Persistent Event Log Pages: Not Supported 00:25:17.289 Supported Log Pages Log Page: May Support 00:25:17.289 Commands Supported & Effects Log Page: Not Supported 00:25:17.289 Feature Identifiers & Effects Log Page:May Support 00:25:17.289 NVMe-MI Commands & Effects Log Page: May Support 00:25:17.289 Data Area 4 for Telemetry Log: Not Supported 00:25:17.289 Error Log Page Entries Supported: 128 00:25:17.289 Keep Alive: Supported 00:25:17.289 Keep Alive Granularity: 10000 ms 00:25:17.289 00:25:17.289 NVM Command Set Attributes 00:25:17.289 ========================== 00:25:17.289 Submission Queue Entry Size 00:25:17.289 Max: 64 00:25:17.289 Min: 64 00:25:17.289 Completion Queue Entry Size 00:25:17.289 Max: 16 00:25:17.289 Min: 16 00:25:17.289 Number of Namespaces: 32 00:25:17.289 Compare Command: Supported 00:25:17.289 Write Uncorrectable Command: Not Supported 00:25:17.289 Dataset Management Command: Supported 00:25:17.289 Write Zeroes Command: Supported 00:25:17.289 Set Features Save Field: Not Supported 00:25:17.289 Reservations: Supported 00:25:17.289 Timestamp: Not Supported 00:25:17.289 Copy: Supported 00:25:17.289 Volatile Write Cache: Present 00:25:17.289 Atomic Write Unit (Normal): 1 00:25:17.289 Atomic Write Unit (PFail): 1 00:25:17.289 Atomic Compare & Write Unit: 1 00:25:17.289 Fused Compare & Write: Supported 00:25:17.289 Scatter-Gather List 00:25:17.289 SGL Command Set: Supported 00:25:17.289 SGL Keyed: Supported 00:25:17.289 SGL Bit Bucket Descriptor: Not Supported 00:25:17.289 SGL Metadata Pointer: Not Supported 00:25:17.289 Oversized SGL: Not Supported 00:25:17.289 SGL Metadata Address: Not Supported 00:25:17.289 SGL Offset: Supported 00:25:17.289 Transport SGL Data Block: Not Supported 00:25:17.289 Replay Protected Memory Block: Not Supported 00:25:17.289 00:25:17.289 Firmware Slot Information 00:25:17.289 ========================= 00:25:17.289 Active slot: 1 00:25:17.289 Slot 1 Firmware Revision: 24.05 00:25:17.289 00:25:17.289 00:25:17.289 Commands Supported and Effects 00:25:17.289 ============================== 00:25:17.289 Admin Commands 00:25:17.289 -------------- 00:25:17.289 Get Log Page (02h): Supported 00:25:17.289 Identify (06h): Supported 00:25:17.289 Abort (08h): Supported 00:25:17.289 Set Features (09h): Supported 00:25:17.289 Get Features (0Ah): Supported 00:25:17.289 Asynchronous Event Request (0Ch): Supported 00:25:17.289 Keep Alive (18h): Supported 00:25:17.289 I/O Commands 00:25:17.289 ------------ 00:25:17.289 Flush (00h): Supported LBA-Change 00:25:17.289 Write (01h): Supported LBA-Change 00:25:17.289 Read (02h): Supported 00:25:17.289 Compare (05h): Supported 00:25:17.289 Write Zeroes (08h): Supported LBA-Change 00:25:17.289 Dataset Management (09h): Supported LBA-Change 00:25:17.289 Copy (19h): Supported LBA-Change 00:25:17.289 Unknown (79h): Supported LBA-Change 00:25:17.289 Unknown (7Ah): Supported 00:25:17.289 00:25:17.289 Error Log 00:25:17.289 ========= 00:25:17.289 00:25:17.289 Arbitration 00:25:17.289 =========== 00:25:17.289 Arbitration Burst: 1 00:25:17.289 00:25:17.289 Power Management 00:25:17.289 ================ 
00:25:17.289 Number of Power States: 1 00:25:17.289 Current Power State: Power State #0 00:25:17.289 Power State #0: 00:25:17.289 Max Power: 0.00 W 00:25:17.289 Non-Operational State: Operational 00:25:17.289 Entry Latency: Not Reported 00:25:17.290 Exit Latency: Not Reported 00:25:17.290 Relative Read Throughput: 0 00:25:17.290 Relative Read Latency: 0 00:25:17.290 Relative Write Throughput: 0 00:25:17.290 Relative Write Latency: 0 00:25:17.290 Idle Power: Not Reported 00:25:17.290 Active Power: Not Reported 00:25:17.290 Non-Operational Permissive Mode: Not Supported 00:25:17.290 00:25:17.290 Health Information 00:25:17.290 ================== 00:25:17.290 Critical Warnings: 00:25:17.290 Available Spare Space: OK 00:25:17.290 Temperature: OK 00:25:17.290 Device Reliability: OK 00:25:17.290 Read Only: No 00:25:17.290 Volatile Memory Backup: OK 00:25:17.290 Current Temperature: 0 Kelvin (-273 Celsius) 00:25:17.290 Temperature Threshold: [2024-04-18 09:53:40.495894] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:17.290 [2024-04-18 09:53:40.495916] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0x614000002040) 00:25:17.290 [2024-04-18 09:53:40.495932] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:7 cdw10:00000005 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.290 [2024-04-18 09:53:40.495967] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001baa0, cid 7, qid 0 00:25:17.290 [2024-04-18 09:53:40.496057] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:17.290 [2024-04-18 09:53:40.496081] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:17.290 [2024-04-18 09:53:40.496089] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:17.290 [2024-04-18 09:53:40.496096] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x62600001baa0) on tqpair=0x614000002040 00:25:17.290 [2024-04-18 09:53:40.496195] nvme_ctrlr.c:4221:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Prepare to destruct SSD 00:25:17.290 [2024-04-18 09:53:40.496234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:17.290 [2024-04-18 09:53:40.496259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:17.290 [2024-04-18 09:53:40.496270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:17.290 [2024-04-18 09:53:40.496280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:17.290 [2024-04-18 09:53:40.496295] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:17.290 [2024-04-18 09:53:40.496303] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:17.290 [2024-04-18 09:53:40.496316] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x614000002040) 00:25:17.290 [2024-04-18 09:53:40.496331] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.290 [2024-04-18 09:53:40.496369] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b520, cid 3, qid 0 00:25:17.290 [2024-04-18 09:53:40.496446] 
nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:17.290 [2024-04-18 09:53:40.496459] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:17.290 [2024-04-18 09:53:40.496467] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:17.290 [2024-04-18 09:53:40.496474] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x62600001b520) on tqpair=0x614000002040 00:25:17.290 [2024-04-18 09:53:40.496493] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:17.290 [2024-04-18 09:53:40.496502] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:17.290 [2024-04-18 09:53:40.496509] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x614000002040) 00:25:17.290 [2024-04-18 09:53:40.496534] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.290 [2024-04-18 09:53:40.496583] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b520, cid 3, qid 0 00:25:17.290 [2024-04-18 09:53:40.496708] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:17.290 [2024-04-18 09:53:40.496736] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:17.290 [2024-04-18 09:53:40.496744] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:17.290 [2024-04-18 09:53:40.496751] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x62600001b520) on tqpair=0x614000002040 00:25:17.290 [2024-04-18 09:53:40.496761] nvme_ctrlr.c:1082:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] RTD3E = 0 us 00:25:17.290 [2024-04-18 09:53:40.496770] nvme_ctrlr.c:1085:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] shutdown timeout = 10000 ms 00:25:17.290 [2024-04-18 09:53:40.496792] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:17.290 [2024-04-18 09:53:40.496804] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:17.290 [2024-04-18 09:53:40.496814] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x614000002040) 00:25:17.290 [2024-04-18 09:53:40.496828] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.290 [2024-04-18 09:53:40.496859] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b520, cid 3, qid 0 00:25:17.290 [2024-04-18 09:53:40.496982] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:17.290 [2024-04-18 09:53:40.497014] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:17.290 [2024-04-18 09:53:40.497022] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:17.290 [2024-04-18 09:53:40.497029] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x62600001b520) on tqpair=0x614000002040 00:25:17.290 [2024-04-18 09:53:40.497051] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:17.290 [2024-04-18 09:53:40.497063] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:17.290 [2024-04-18 09:53:40.497070] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x614000002040) 00:25:17.290 [2024-04-18 09:53:40.497084] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:25:17.290 [2024-04-18 09:53:40.497112] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b520, cid 3, qid 0 00:25:17.290 [2024-04-18 09:53:40.501153] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:17.290 [2024-04-18 09:53:40.501181] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:17.290 [2024-04-18 09:53:40.501190] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:17.290 [2024-04-18 09:53:40.501197] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x62600001b520) on tqpair=0x614000002040 00:25:17.290 [2024-04-18 09:53:40.501221] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:17.290 [2024-04-18 09:53:40.501230] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:17.290 [2024-04-18 09:53:40.501242] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x614000002040) 00:25:17.290 [2024-04-18 09:53:40.501258] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.290 [2024-04-18 09:53:40.501294] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b520, cid 3, qid 0 00:25:17.290 [2024-04-18 09:53:40.501407] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:17.290 [2024-04-18 09:53:40.501422] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:17.290 [2024-04-18 09:53:40.501429] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:17.290 [2024-04-18 09:53:40.501436] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x62600001b520) on tqpair=0x614000002040 00:25:17.290 [2024-04-18 09:53:40.501450] nvme_ctrlr.c:1204:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] shutdown complete in 4 milliseconds 00:25:17.290 0 Kelvin (-273 Celsius) 00:25:17.290 Available Spare: 0% 00:25:17.290 Available Spare Threshold: 0% 00:25:17.290 Life Percentage Used: 0% 00:25:17.290 Data Units Read: 0 00:25:17.290 Data Units Written: 0 00:25:17.290 Host Read Commands: 0 00:25:17.290 Host Write Commands: 0 00:25:17.290 Controller Busy Time: 0 minutes 00:25:17.291 Power Cycles: 0 00:25:17.291 Power On Hours: 0 hours 00:25:17.291 Unsafe Shutdowns: 0 00:25:17.291 Unrecoverable Media Errors: 0 00:25:17.291 Lifetime Error Log Entries: 0 00:25:17.291 Warning Temperature Time: 0 minutes 00:25:17.291 Critical Temperature Time: 0 minutes 00:25:17.291 00:25:17.291 Number of Queues 00:25:17.291 ================ 00:25:17.291 Number of I/O Submission Queues: 127 00:25:17.291 Number of I/O Completion Queues: 127 00:25:17.291 00:25:17.291 Active Namespaces 00:25:17.291 ================= 00:25:17.291 Namespace ID:1 00:25:17.291 Error Recovery Timeout: Unlimited 00:25:17.291 Command Set Identifier: NVM (00h) 00:25:17.291 Deallocate: Supported 00:25:17.291 Deallocated/Unwritten Error: Not Supported 00:25:17.291 Deallocated Read Value: Unknown 00:25:17.291 Deallocate in Write Zeroes: Not Supported 00:25:17.291 Deallocated Guard Field: 0xFFFF 00:25:17.291 Flush: Supported 00:25:17.291 Reservation: Supported 00:25:17.291 Namespace Sharing Capabilities: Multiple Controllers 00:25:17.291 Size (in LBAs): 131072 (0GiB) 00:25:17.291 Capacity (in LBAs): 131072 (0GiB) 00:25:17.291 Utilization (in LBAs): 131072 (0GiB) 00:25:17.291 NGUID: ABCDEF0123456789ABCDEF0123456789 00:25:17.291 EUI64: ABCDEF0123456789 00:25:17.291 UUID: 
6e80859f-6027-4883-921d-a5e1ab43a749 00:25:17.291 Thin Provisioning: Not Supported 00:25:17.291 Per-NS Atomic Units: Yes 00:25:17.291 Atomic Boundary Size (Normal): 0 00:25:17.291 Atomic Boundary Size (PFail): 0 00:25:17.291 Atomic Boundary Offset: 0 00:25:17.291 Maximum Single Source Range Length: 65535 00:25:17.291 Maximum Copy Length: 65535 00:25:17.291 Maximum Source Range Count: 1 00:25:17.291 NGUID/EUI64 Never Reused: No 00:25:17.291 Namespace Write Protected: No 00:25:17.291 Number of LBA Formats: 1 00:25:17.291 Current LBA Format: LBA Format #00 00:25:17.291 LBA Format #00: Data Size: 512 Metadata Size: 0 00:25:17.291 00:25:17.291 09:53:40 -- host/identify.sh@51 -- # sync 00:25:17.291 09:53:40 -- host/identify.sh@52 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:25:17.291 09:53:40 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:17.291 09:53:40 -- common/autotest_common.sh@10 -- # set +x 00:25:17.291 09:53:40 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:17.291 09:53:40 -- host/identify.sh@54 -- # trap - SIGINT SIGTERM EXIT 00:25:17.291 09:53:40 -- host/identify.sh@56 -- # nvmftestfini 00:25:17.291 09:53:40 -- nvmf/common.sh@477 -- # nvmfcleanup 00:25:17.291 09:53:40 -- nvmf/common.sh@117 -- # sync 00:25:17.291 09:53:40 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:25:17.291 09:53:40 -- nvmf/common.sh@120 -- # set +e 00:25:17.291 09:53:40 -- nvmf/common.sh@121 -- # for i in {1..20} 00:25:17.291 09:53:40 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:25:17.291 rmmod nvme_tcp 00:25:17.291 rmmod nvme_fabrics 00:25:17.291 rmmod nvme_keyring 00:25:17.291 09:53:40 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:25:17.291 09:53:40 -- nvmf/common.sh@124 -- # set -e 00:25:17.291 09:53:40 -- nvmf/common.sh@125 -- # return 0 00:25:17.291 09:53:40 -- nvmf/common.sh@478 -- # '[' -n 74850 ']' 00:25:17.291 09:53:40 -- nvmf/common.sh@479 -- # killprocess 74850 00:25:17.291 09:53:40 -- common/autotest_common.sh@936 -- # '[' -z 74850 ']' 00:25:17.291 09:53:40 -- common/autotest_common.sh@940 -- # kill -0 74850 00:25:17.291 09:53:40 -- common/autotest_common.sh@941 -- # uname 00:25:17.291 09:53:40 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:25:17.549 09:53:40 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 74850 00:25:17.549 09:53:40 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:25:17.549 09:53:40 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:25:17.549 killing process with pid 74850 00:25:17.549 09:53:40 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 74850' 00:25:17.549 09:53:40 -- common/autotest_common.sh@955 -- # kill 74850 00:25:17.549 [2024-04-18 09:53:40.699408] app.c: 937:log_deprecation_hits: *WARNING*: rpc_nvmf_get_subsystems: deprecation 'listener.transport is deprecated in favor of trtype' scheduled for removal in v24.05 hit 1 times 00:25:17.549 09:53:40 -- common/autotest_common.sh@960 -- # wait 74850 00:25:18.938 09:53:42 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:25:18.938 09:53:42 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:25:18.938 09:53:42 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:25:18.938 09:53:42 -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:25:18.938 09:53:42 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:25:18.938 09:53:42 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:18.938 09:53:42 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 
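For reference, the nvmftestfini teardown being traced here condenses to roughly the following sketch (pid, NQN and interface names are the ones from this run; _remove_spdk_ns is approximated by an ip netns delete, its exact implementation is not shown in this trace):
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
  modprobe -v -r nvme-tcp
  modprobe -v -r nvme-fabrics
  kill 74850                              # killprocess: the nvmf_tgt started for identify.sh
  ip netns delete nvmf_tgt_ns_spdk        # roughly what _remove_spdk_ns does
  ip -4 addr flush nvmf_init_if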
00:25:18.938 09:53:42 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:18.938 09:53:42 -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:25:18.938 00:25:18.938 real 0m3.975s 00:25:18.938 user 0m10.574s 00:25:18.938 sys 0m0.914s 00:25:18.938 09:53:42 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:25:18.938 09:53:42 -- common/autotest_common.sh@10 -- # set +x 00:25:18.938 ************************************ 00:25:18.938 END TEST nvmf_identify 00:25:18.938 ************************************ 00:25:18.939 09:53:42 -- nvmf/nvmf.sh@96 -- # run_test nvmf_perf /home/vagrant/spdk_repo/spdk/test/nvmf/host/perf.sh --transport=tcp 00:25:18.939 09:53:42 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:25:18.939 09:53:42 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:25:18.939 09:53:42 -- common/autotest_common.sh@10 -- # set +x 00:25:18.939 ************************************ 00:25:18.939 START TEST nvmf_perf 00:25:18.939 ************************************ 00:25:18.939 09:53:42 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/perf.sh --transport=tcp 00:25:18.939 * Looking for test storage... 00:25:18.939 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:25:18.939 09:53:42 -- host/perf.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:25:19.260 09:53:42 -- nvmf/common.sh@7 -- # uname -s 00:25:19.260 09:53:42 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:25:19.260 09:53:42 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:19.260 09:53:42 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:19.260 09:53:42 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:19.260 09:53:42 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:25:19.260 09:53:42 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:19.260 09:53:42 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:19.260 09:53:42 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:19.260 09:53:42 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:19.260 09:53:42 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:19.260 09:53:42 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:e0c1fd61-2902-4c83-8893-eeb512aeb67d 00:25:19.260 09:53:42 -- nvmf/common.sh@18 -- # NVME_HOSTID=e0c1fd61-2902-4c83-8893-eeb512aeb67d 00:25:19.260 09:53:42 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:19.260 09:53:42 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:19.260 09:53:42 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:25:19.260 09:53:42 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:25:19.260 09:53:42 -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:25:19.260 09:53:42 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:19.260 09:53:42 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:19.260 09:53:42 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:19.260 09:53:42 -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:19.260 09:53:42 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:19.260 09:53:42 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:19.260 09:53:42 -- paths/export.sh@5 -- # export PATH 00:25:19.260 09:53:42 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:19.260 09:53:42 -- nvmf/common.sh@47 -- # : 0 00:25:19.260 09:53:42 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:25:19.260 09:53:42 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:25:19.260 09:53:42 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:25:19.260 09:53:42 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:19.260 09:53:42 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:19.260 09:53:42 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:25:19.260 09:53:42 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:25:19.260 09:53:42 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:25:19.260 09:53:42 -- host/perf.sh@12 -- # MALLOC_BDEV_SIZE=64 00:25:19.260 09:53:42 -- host/perf.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:25:19.260 09:53:42 -- host/perf.sh@15 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:25:19.260 09:53:42 -- host/perf.sh@17 -- # nvmftestinit 00:25:19.260 09:53:42 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:25:19.260 09:53:42 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:25:19.260 09:53:42 -- nvmf/common.sh@437 -- # prepare_net_devs 00:25:19.260 09:53:42 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:25:19.260 09:53:42 -- nvmf/common.sh@401 -- # 
remove_spdk_ns 00:25:19.260 09:53:42 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:19.260 09:53:42 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:25:19.260 09:53:42 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:19.260 09:53:42 -- nvmf/common.sh@403 -- # [[ virt != virt ]] 00:25:19.260 09:53:42 -- nvmf/common.sh@405 -- # [[ no == yes ]] 00:25:19.260 09:53:42 -- nvmf/common.sh@412 -- # [[ virt == phy ]] 00:25:19.260 09:53:42 -- nvmf/common.sh@415 -- # [[ virt == phy-fallback ]] 00:25:19.261 09:53:42 -- nvmf/common.sh@420 -- # [[ tcp == tcp ]] 00:25:19.261 09:53:42 -- nvmf/common.sh@421 -- # nvmf_veth_init 00:25:19.261 09:53:42 -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:25:19.261 09:53:42 -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:25:19.261 09:53:42 -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:25:19.261 09:53:42 -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:25:19.261 09:53:42 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:25:19.261 09:53:42 -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:25:19.261 09:53:42 -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:25:19.261 09:53:42 -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:25:19.261 09:53:42 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:25:19.261 09:53:42 -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:25:19.261 09:53:42 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:25:19.261 09:53:42 -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:25:19.261 09:53:42 -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:25:19.261 09:53:42 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:25:19.261 Cannot find device "nvmf_tgt_br" 00:25:19.261 09:53:42 -- nvmf/common.sh@155 -- # true 00:25:19.261 09:53:42 -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:25:19.261 Cannot find device "nvmf_tgt_br2" 00:25:19.261 09:53:42 -- nvmf/common.sh@156 -- # true 00:25:19.261 09:53:42 -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:25:19.261 09:53:42 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:25:19.261 Cannot find device "nvmf_tgt_br" 00:25:19.261 09:53:42 -- nvmf/common.sh@158 -- # true 00:25:19.261 09:53:42 -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:25:19.261 Cannot find device "nvmf_tgt_br2" 00:25:19.261 09:53:42 -- nvmf/common.sh@159 -- # true 00:25:19.261 09:53:42 -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:25:19.261 09:53:42 -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:25:19.261 09:53:42 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:25:19.261 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:25:19.261 09:53:42 -- nvmf/common.sh@162 -- # true 00:25:19.261 09:53:42 -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:25:19.261 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:25:19.261 09:53:42 -- nvmf/common.sh@163 -- # true 00:25:19.261 09:53:42 -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:25:19.261 09:53:42 -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:25:19.261 09:53:42 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:25:19.261 09:53:42 -- 
nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:25:19.261 09:53:42 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:25:19.261 09:53:42 -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:25:19.261 09:53:42 -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:25:19.261 09:53:42 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:25:19.261 09:53:42 -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:25:19.261 09:53:42 -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:25:19.261 09:53:42 -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:25:19.261 09:53:42 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:25:19.261 09:53:42 -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:25:19.261 09:53:42 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:25:19.261 09:53:42 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:25:19.261 09:53:42 -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:25:19.261 09:53:42 -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:25:19.261 09:53:42 -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:25:19.261 09:53:42 -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:25:19.261 09:53:42 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:25:19.261 09:53:42 -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:25:19.261 09:53:42 -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:25:19.261 09:53:42 -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:25:19.261 09:53:42 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:25:19.520 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:25:19.520 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.096 ms 00:25:19.520 00:25:19.520 --- 10.0.0.2 ping statistics --- 00:25:19.520 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:19.520 rtt min/avg/max/mdev = 0.096/0.096/0.096/0.000 ms 00:25:19.520 09:53:42 -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:25:19.520 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:25:19.520 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.065 ms 00:25:19.520 00:25:19.520 --- 10.0.0.3 ping statistics --- 00:25:19.520 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:19.520 rtt min/avg/max/mdev = 0.065/0.065/0.065/0.000 ms 00:25:19.520 09:53:42 -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:25:19.520 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:25:19.520 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.050 ms 00:25:19.520 00:25:19.520 --- 10.0.0.1 ping statistics --- 00:25:19.520 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:19.520 rtt min/avg/max/mdev = 0.050/0.050/0.050/0.000 ms 00:25:19.520 09:53:42 -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:25:19.520 09:53:42 -- nvmf/common.sh@422 -- # return 0 00:25:19.520 09:53:42 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:25:19.520 09:53:42 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:25:19.520 09:53:42 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:25:19.520 09:53:42 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:25:19.520 09:53:42 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:25:19.520 09:53:42 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:25:19.520 09:53:42 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:25:19.520 09:53:42 -- host/perf.sh@18 -- # nvmfappstart -m 0xF 00:25:19.520 09:53:42 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:25:19.520 09:53:42 -- common/autotest_common.sh@710 -- # xtrace_disable 00:25:19.520 09:53:42 -- common/autotest_common.sh@10 -- # set +x 00:25:19.521 09:53:42 -- nvmf/common.sh@470 -- # nvmfpid=75083 00:25:19.521 09:53:42 -- nvmf/common.sh@471 -- # waitforlisten 75083 00:25:19.521 09:53:42 -- nvmf/common.sh@469 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:25:19.521 09:53:42 -- common/autotest_common.sh@817 -- # '[' -z 75083 ']' 00:25:19.521 09:53:42 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:19.521 09:53:42 -- common/autotest_common.sh@822 -- # local max_retries=100 00:25:19.521 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:19.521 09:53:42 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:19.521 09:53:42 -- common/autotest_common.sh@826 -- # xtrace_disable 00:25:19.521 09:53:42 -- common/autotest_common.sh@10 -- # set +x 00:25:19.521 [2024-04-18 09:53:42.819668] Starting SPDK v24.05-pre git sha1 65b4e17c6 / DPDK 23.11.0 initialization... 00:25:19.521 [2024-04-18 09:53:42.819831] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:19.780 [2024-04-18 09:53:43.002497] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:25:20.038 [2024-04-18 09:53:43.282751] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:25:20.038 [2024-04-18 09:53:43.282829] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:25:20.038 [2024-04-18 09:53:43.282854] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:25:20.038 [2024-04-18 09:53:43.282871] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:25:20.038 [2024-04-18 09:53:43.282905] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
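Condensed, the veth/bridge topology that nvmf_veth_init assembled above is sketched below; the second target leg (nvmf_tgt_if2 / 10.0.0.3) and the individual "ip link set ... up" calls are elided, and all names and addresses are the ones used in this run:
  ip netns add nvmf_tgt_ns_spdk
  ip link add nvmf_init_if type veth peer name nvmf_init_br
  ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
  ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
  ip addr add 10.0.0.1/24 dev nvmf_init_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
  ip link add nvmf_br type bridge
  ip link set nvmf_init_br master nvmf_br
  ip link set nvmf_tgt_br master nvmf_br
  iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
  iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
  # the target then runs inside the namespace and listens on 10.0.0.2:
  ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF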
00:25:20.038 [2024-04-18 09:53:43.283183] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:25:20.038 [2024-04-18 09:53:43.283696] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:25:20.038 [2024-04-18 09:53:43.283993] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:25:20.038 [2024-04-18 09:53:43.284092] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:25:20.605 09:53:43 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:25:20.605 09:53:43 -- common/autotest_common.sh@850 -- # return 0 00:25:20.605 09:53:43 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:25:20.605 09:53:43 -- common/autotest_common.sh@716 -- # xtrace_disable 00:25:20.605 09:53:43 -- common/autotest_common.sh@10 -- # set +x 00:25:20.605 09:53:43 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:25:20.605 09:53:43 -- host/perf.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:25:20.605 09:53:43 -- host/perf.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_subsystem_config 00:25:21.171 09:53:44 -- host/perf.sh@30 -- # jq -r '.[].params | select(.name=="Nvme0").traddr' 00:25:21.171 09:53:44 -- host/perf.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py framework_get_config bdev 00:25:21.428 09:53:44 -- host/perf.sh@30 -- # local_nvme_trid=0000:00:10.0 00:25:21.428 09:53:44 -- host/perf.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:25:21.686 09:53:44 -- host/perf.sh@31 -- # bdevs=' Malloc0' 00:25:21.686 09:53:44 -- host/perf.sh@33 -- # '[' -n 0000:00:10.0 ']' 00:25:21.686 09:53:44 -- host/perf.sh@34 -- # bdevs=' Malloc0 Nvme0n1' 00:25:21.686 09:53:44 -- host/perf.sh@37 -- # '[' tcp == rdma ']' 00:25:21.686 09:53:44 -- host/perf.sh@42 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:25:21.944 [2024-04-18 09:53:45.172253] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:21.944 09:53:45 -- host/perf.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:25:22.202 09:53:45 -- host/perf.sh@45 -- # for bdev in $bdevs 00:25:22.202 09:53:45 -- host/perf.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:25:22.460 09:53:45 -- host/perf.sh@45 -- # for bdev in $bdevs 00:25:22.460 09:53:45 -- host/perf.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:25:22.719 09:53:46 -- host/perf.sh@48 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:25:22.977 [2024-04-18 09:53:46.273991] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:22.977 09:53:46 -- host/perf.sh@49 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:25:23.236 09:53:46 -- host/perf.sh@52 -- # '[' -n 0000:00:10.0 ']' 00:25:23.236 09:53:46 -- host/perf.sh@53 -- # perf_app -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:00:10.0' 00:25:23.236 09:53:46 -- host/perf.sh@21 -- # '[' 0 -eq 1 ']' 00:25:23.236 09:53:46 -- host/perf.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:00:10.0' 00:25:24.610 Initializing NVMe 
Controllers 00:25:24.610 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:25:24.610 Associating PCIE (0000:00:10.0) NSID 1 with lcore 0 00:25:24.610 Initialization complete. Launching workers. 00:25:24.610 ======================================================== 00:25:24.610 Latency(us) 00:25:24.610 Device Information : IOPS MiB/s Average min max 00:25:24.610 PCIE (0000:00:10.0) NSID 1 from core 0: 21110.75 82.46 1515.84 398.10 7708.49 00:25:24.610 ======================================================== 00:25:24.610 Total : 21110.75 82.46 1515.84 398.10 7708.49 00:25:24.610 00:25:24.610 09:53:47 -- host/perf.sh@56 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 1 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:25:25.985 Initializing NVMe Controllers 00:25:25.985 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:25:25.985 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:25:25.985 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:25:25.985 Initialization complete. Launching workers. 00:25:25.985 ======================================================== 00:25:25.985 Latency(us) 00:25:25.985 Device Information : IOPS MiB/s Average min max 00:25:25.985 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 2153.69 8.41 463.81 159.01 6224.38 00:25:25.985 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 124.75 0.49 8078.32 6050.15 12054.62 00:25:25.985 ======================================================== 00:25:25.985 Total : 2278.44 8.90 880.72 159.01 12054.62 00:25:25.985 00:25:25.985 09:53:49 -- host/perf.sh@57 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 32 -o 4096 -w randrw -M 50 -t 1 -HI -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:25:27.359 Initializing NVMe Controllers 00:25:27.359 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:25:27.359 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:25:27.359 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:25:27.359 Initialization complete. Launching workers. 00:25:27.359 ======================================================== 00:25:27.359 Latency(us) 00:25:27.359 Device Information : IOPS MiB/s Average min max 00:25:27.359 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 5602.76 21.89 5721.42 752.27 13380.36 00:25:27.359 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 3920.84 15.32 8214.26 6046.30 17372.38 00:25:27.359 ======================================================== 00:25:27.359 Total : 9523.60 37.20 6747.72 752.27 17372.38 00:25:27.359 00:25:27.359 09:53:50 -- host/perf.sh@59 -- # [[ '' == \e\8\1\0 ]] 00:25:27.359 09:53:50 -- host/perf.sh@60 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -O 16384 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:25:30.661 Initializing NVMe Controllers 00:25:30.661 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:25:30.661 Controller IO queue size 128, less than required. 00:25:30.661 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:25:30.661 Controller IO queue size 128, less than required. 
00:25:30.661 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:25:30.661 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:25:30.661 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:25:30.661 Initialization complete. Launching workers. 00:25:30.661 ======================================================== 00:25:30.661 Latency(us) 00:25:30.661 Device Information : IOPS MiB/s Average min max 00:25:30.661 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1241.55 310.39 104950.02 41512.16 253206.61 00:25:30.661 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 573.37 143.34 248215.87 123671.40 515200.62 00:25:30.661 ======================================================== 00:25:30.661 Total : 1814.92 453.73 150210.53 41512.16 515200.62 00:25:30.661 00:25:30.661 09:53:53 -- host/perf.sh@64 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -o 36964 -O 4096 -w randrw -M 50 -t 5 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0xf -P 4 00:25:30.661 No valid NVMe controllers or AIO or URING devices found 00:25:30.919 Initializing NVMe Controllers 00:25:30.919 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:25:30.919 Controller IO queue size 128, less than required. 00:25:30.919 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:25:30.919 WARNING: IO size 36964 (-o) is not a multiple of nsid 1 sector size 512. Removing this ns from test 00:25:30.919 Controller IO queue size 128, less than required. 00:25:30.919 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:25:30.919 WARNING: IO size 36964 (-o) is not a multiple of nsid 2 sector size 4096. Removing this ns from test 00:25:30.919 WARNING: Some requested NVMe devices were skipped 00:25:30.919 09:53:54 -- host/perf.sh@65 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' --transport-stat 00:25:34.203 Initializing NVMe Controllers 00:25:34.203 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:25:34.203 Controller IO queue size 128, less than required. 00:25:34.203 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:25:34.203 Controller IO queue size 128, less than required. 00:25:34.203 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:25:34.203 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:25:34.203 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:25:34.203 Initialization complete. Launching workers. 
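Every spdk_nvme_perf invocation above (and the --transport-stat run whose statistics follow below) targets the same subsystem, which perf.sh stood up earlier with this RPC sequence, restated here as a sketch with the commands as traced above; Malloc0 is the 64 MiB / 512 B malloc bdev and Nvme0n1 the local 0000:00:10.0 drive:
  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  $rpc bdev_malloc_create 64 512
  $rpc nvmf_create_transport -t tcp -o
  $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  $rpc nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420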
00:25:34.203 00:25:34.203 ==================== 00:25:34.203 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 statistics: 00:25:34.203 TCP transport: 00:25:34.203 polls: 4848 00:25:34.203 idle_polls: 0 00:25:34.203 sock_completions: 4848 00:25:34.203 nvme_completions: 4447 00:25:34.203 submitted_requests: 6614 00:25:34.203 queued_requests: 1 00:25:34.203 00:25:34.203 ==================== 00:25:34.203 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 statistics: 00:25:34.203 TCP transport: 00:25:34.203 polls: 5538 00:25:34.203 idle_polls: 0 00:25:34.203 sock_completions: 5538 00:25:34.203 nvme_completions: 4791 00:25:34.203 submitted_requests: 7146 00:25:34.203 queued_requests: 1 00:25:34.203 ======================================================== 00:25:34.203 Latency(us) 00:25:34.203 Device Information : IOPS MiB/s Average min max 00:25:34.203 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1111.28 277.82 122486.71 53888.41 420863.33 00:25:34.203 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 1197.26 299.31 107808.09 52493.26 354513.21 00:25:34.203 ======================================================== 00:25:34.203 Total : 2308.54 577.13 114874.04 52493.26 420863.33 00:25:34.203 00:25:34.203 09:53:57 -- host/perf.sh@66 -- # sync 00:25:34.203 09:53:57 -- host/perf.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:25:34.203 09:53:57 -- host/perf.sh@69 -- # '[' 0 -eq 1 ']' 00:25:34.203 09:53:57 -- host/perf.sh@112 -- # trap - SIGINT SIGTERM EXIT 00:25:34.203 09:53:57 -- host/perf.sh@114 -- # nvmftestfini 00:25:34.203 09:53:57 -- nvmf/common.sh@477 -- # nvmfcleanup 00:25:34.203 09:53:57 -- nvmf/common.sh@117 -- # sync 00:25:34.203 09:53:57 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:25:34.203 09:53:57 -- nvmf/common.sh@120 -- # set +e 00:25:34.203 09:53:57 -- nvmf/common.sh@121 -- # for i in {1..20} 00:25:34.203 09:53:57 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:25:34.203 rmmod nvme_tcp 00:25:34.203 rmmod nvme_fabrics 00:25:34.203 rmmod nvme_keyring 00:25:34.203 09:53:57 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:25:34.203 09:53:57 -- nvmf/common.sh@124 -- # set -e 00:25:34.203 09:53:57 -- nvmf/common.sh@125 -- # return 0 00:25:34.203 09:53:57 -- nvmf/common.sh@478 -- # '[' -n 75083 ']' 00:25:34.203 09:53:57 -- nvmf/common.sh@479 -- # killprocess 75083 00:25:34.203 09:53:57 -- common/autotest_common.sh@936 -- # '[' -z 75083 ']' 00:25:34.203 09:53:57 -- common/autotest_common.sh@940 -- # kill -0 75083 00:25:34.203 09:53:57 -- common/autotest_common.sh@941 -- # uname 00:25:34.203 09:53:57 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:25:34.203 09:53:57 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 75083 00:25:34.203 killing process with pid 75083 00:25:34.203 09:53:57 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:25:34.203 09:53:57 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:25:34.203 09:53:57 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 75083' 00:25:34.203 09:53:57 -- common/autotest_common.sh@955 -- # kill 75083 00:25:34.203 09:53:57 -- common/autotest_common.sh@960 -- # wait 75083 00:25:36.108 09:53:59 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:25:36.108 09:53:59 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:25:36.108 09:53:59 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:25:36.108 09:53:59 -- 
nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:25:36.108 09:53:59 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:25:36.108 09:53:59 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:36.108 09:53:59 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:25:36.108 09:53:59 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:36.108 09:53:59 -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:25:36.108 00:25:36.108 real 0m16.838s 00:25:36.108 user 1m1.152s 00:25:36.108 sys 0m4.369s 00:25:36.108 09:53:59 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:25:36.108 09:53:59 -- common/autotest_common.sh@10 -- # set +x 00:25:36.108 ************************************ 00:25:36.108 END TEST nvmf_perf 00:25:36.108 ************************************ 00:25:36.108 09:53:59 -- nvmf/nvmf.sh@97 -- # run_test nvmf_fio_host /home/vagrant/spdk_repo/spdk/test/nvmf/host/fio.sh --transport=tcp 00:25:36.108 09:53:59 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:25:36.108 09:53:59 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:25:36.108 09:53:59 -- common/autotest_common.sh@10 -- # set +x 00:25:36.108 ************************************ 00:25:36.108 START TEST nvmf_fio_host 00:25:36.108 ************************************ 00:25:36.108 09:53:59 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/fio.sh --transport=tcp 00:25:36.108 * Looking for test storage... 00:25:36.108 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:25:36.108 09:53:59 -- host/fio.sh@9 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:25:36.108 09:53:59 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:36.108 09:53:59 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:36.108 09:53:59 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:36.108 09:53:59 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:36.108 09:53:59 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:36.108 09:53:59 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:36.108 09:53:59 -- paths/export.sh@5 -- # export PATH 00:25:36.108 09:53:59 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:36.108 09:53:59 -- host/fio.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:25:36.108 09:53:59 -- nvmf/common.sh@7 -- # uname -s 00:25:36.108 09:53:59 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:25:36.108 09:53:59 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:36.108 09:53:59 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:36.108 09:53:59 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:36.108 09:53:59 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:25:36.108 09:53:59 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:36.108 09:53:59 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:36.108 09:53:59 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:36.108 09:53:59 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:36.108 09:53:59 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:36.108 09:53:59 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:e0c1fd61-2902-4c83-8893-eeb512aeb67d 00:25:36.108 09:53:59 -- nvmf/common.sh@18 -- # NVME_HOSTID=e0c1fd61-2902-4c83-8893-eeb512aeb67d 00:25:36.108 09:53:59 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:36.108 09:53:59 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:36.108 09:53:59 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:25:36.108 09:53:59 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:25:36.108 09:53:59 -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:25:36.108 09:53:59 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:36.108 09:53:59 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:36.108 09:53:59 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:36.108 09:53:59 -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:36.108 09:53:59 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:36.108 09:53:59 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:36.108 09:53:59 -- paths/export.sh@5 -- # export PATH 00:25:36.109 09:53:59 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:36.109 09:53:59 -- nvmf/common.sh@47 -- # : 0 00:25:36.109 09:53:59 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:25:36.109 09:53:59 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:25:36.109 09:53:59 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:25:36.109 09:53:59 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:36.109 09:53:59 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:36.109 09:53:59 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:25:36.109 09:53:59 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:25:36.109 09:53:59 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:25:36.109 09:53:59 -- host/fio.sh@12 -- # nvmftestinit 00:25:36.109 09:53:59 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:25:36.109 09:53:59 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:25:36.109 09:53:59 -- nvmf/common.sh@437 -- # prepare_net_devs 00:25:36.109 09:53:59 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:25:36.109 09:53:59 -- 
nvmf/common.sh@401 -- # remove_spdk_ns 00:25:36.109 09:53:59 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:36.109 09:53:59 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:25:36.109 09:53:59 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:36.109 09:53:59 -- nvmf/common.sh@403 -- # [[ virt != virt ]] 00:25:36.109 09:53:59 -- nvmf/common.sh@405 -- # [[ no == yes ]] 00:25:36.109 09:53:59 -- nvmf/common.sh@412 -- # [[ virt == phy ]] 00:25:36.109 09:53:59 -- nvmf/common.sh@415 -- # [[ virt == phy-fallback ]] 00:25:36.109 09:53:59 -- nvmf/common.sh@420 -- # [[ tcp == tcp ]] 00:25:36.109 09:53:59 -- nvmf/common.sh@421 -- # nvmf_veth_init 00:25:36.109 09:53:59 -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:25:36.109 09:53:59 -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:25:36.109 09:53:59 -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:25:36.109 09:53:59 -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:25:36.109 09:53:59 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:25:36.109 09:53:59 -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:25:36.109 09:53:59 -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:25:36.109 09:53:59 -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:25:36.109 09:53:59 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:25:36.109 09:53:59 -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:25:36.109 09:53:59 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:25:36.109 09:53:59 -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:25:36.109 09:53:59 -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:25:36.109 09:53:59 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:25:36.109 Cannot find device "nvmf_tgt_br" 00:25:36.109 09:53:59 -- nvmf/common.sh@155 -- # true 00:25:36.109 09:53:59 -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:25:36.109 Cannot find device "nvmf_tgt_br2" 00:25:36.109 09:53:59 -- nvmf/common.sh@156 -- # true 00:25:36.109 09:53:59 -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:25:36.109 09:53:59 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:25:36.109 Cannot find device "nvmf_tgt_br" 00:25:36.109 09:53:59 -- nvmf/common.sh@158 -- # true 00:25:36.109 09:53:59 -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:25:36.109 Cannot find device "nvmf_tgt_br2" 00:25:36.109 09:53:59 -- nvmf/common.sh@159 -- # true 00:25:36.109 09:53:59 -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:25:36.109 09:53:59 -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:25:36.109 09:53:59 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:25:36.109 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:25:36.109 09:53:59 -- nvmf/common.sh@162 -- # true 00:25:36.109 09:53:59 -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:25:36.109 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:25:36.109 09:53:59 -- nvmf/common.sh@163 -- # true 00:25:36.109 09:53:59 -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:25:36.109 09:53:59 -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:25:36.109 09:53:59 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 
00:25:36.109 09:53:59 -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:25:36.109 09:53:59 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:25:36.109 09:53:59 -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:25:36.109 09:53:59 -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:25:36.109 09:53:59 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:25:36.109 09:53:59 -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:25:36.109 09:53:59 -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:25:36.109 09:53:59 -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:25:36.109 09:53:59 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:25:36.109 09:53:59 -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:25:36.368 09:53:59 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:25:36.368 09:53:59 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:25:36.368 09:53:59 -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:25:36.368 09:53:59 -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:25:36.368 09:53:59 -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:25:36.368 09:53:59 -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:25:36.368 09:53:59 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:25:36.368 09:53:59 -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:25:36.368 09:53:59 -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:25:36.368 09:53:59 -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:25:36.368 09:53:59 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:25:36.368 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:25:36.368 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.122 ms 00:25:36.368 00:25:36.368 --- 10.0.0.2 ping statistics --- 00:25:36.368 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:36.368 rtt min/avg/max/mdev = 0.122/0.122/0.122/0.000 ms 00:25:36.368 09:53:59 -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:25:36.368 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:25:36.368 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.057 ms 00:25:36.368 00:25:36.368 --- 10.0.0.3 ping statistics --- 00:25:36.368 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:36.368 rtt min/avg/max/mdev = 0.057/0.057/0.057/0.000 ms 00:25:36.368 09:53:59 -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:25:36.368 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:25:36.368 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.030 ms 00:25:36.368 00:25:36.368 --- 10.0.0.1 ping statistics --- 00:25:36.368 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:36.368 rtt min/avg/max/mdev = 0.030/0.030/0.030/0.000 ms 00:25:36.368 09:53:59 -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:25:36.368 09:53:59 -- nvmf/common.sh@422 -- # return 0 00:25:36.368 09:53:59 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:25:36.368 09:53:59 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:25:36.368 09:53:59 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:25:36.368 09:53:59 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:25:36.368 09:53:59 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:25:36.368 09:53:59 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:25:36.368 09:53:59 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:25:36.368 09:53:59 -- host/fio.sh@14 -- # [[ y != y ]] 00:25:36.368 09:53:59 -- host/fio.sh@19 -- # timing_enter start_nvmf_tgt 00:25:36.368 09:53:59 -- common/autotest_common.sh@710 -- # xtrace_disable 00:25:36.368 09:53:59 -- common/autotest_common.sh@10 -- # set +x 00:25:36.368 09:53:59 -- host/fio.sh@22 -- # nvmfpid=75525 00:25:36.368 09:53:59 -- host/fio.sh@21 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:25:36.368 09:53:59 -- host/fio.sh@24 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:25:36.368 09:53:59 -- host/fio.sh@26 -- # waitforlisten 75525 00:25:36.368 09:53:59 -- common/autotest_common.sh@817 -- # '[' -z 75525 ']' 00:25:36.368 09:53:59 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:36.368 09:53:59 -- common/autotest_common.sh@822 -- # local max_retries=100 00:25:36.368 09:53:59 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:36.368 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:36.368 09:53:59 -- common/autotest_common.sh@826 -- # xtrace_disable 00:25:36.368 09:53:59 -- common/autotest_common.sh@10 -- # set +x 00:25:36.368 [2024-04-18 09:53:59.735028] Starting SPDK v24.05-pre git sha1 65b4e17c6 / DPDK 23.11.0 initialization... 00:25:36.368 [2024-04-18 09:53:59.735188] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:36.626 [2024-04-18 09:53:59.904541] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:25:36.885 [2024-04-18 09:54:00.156993] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:25:36.885 [2024-04-18 09:54:00.157056] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:25:36.885 [2024-04-18 09:54:00.157077] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:25:36.885 [2024-04-18 09:54:00.157092] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:25:36.885 [2024-04-18 09:54:00.157105] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
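The nvmf_veth_init trace above is what gives the fio host suite its initiator/target network: the SPDK target runs inside the nvmf_tgt_ns_spdk namespace on 10.0.0.2 while the initiator stays in the root namespace on 10.0.0.1, with veth pairs joined by a bridge. Condensed into a standalone sketch for readability (same interface names and addresses as in the log; the second target interface nvmf_tgt_if2/10.0.0.3 is set up the same way and omitted here):

  ip netns add nvmf_tgt_ns_spdk                                  # target-side namespace
  ip link add nvmf_init_if type veth peer name nvmf_init_br      # initiator veth pair
  ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br        # target veth pair
  ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk                 # move the target end into the namespace
  ip addr add 10.0.0.1/24 dev nvmf_init_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
  ip link set nvmf_init_if up && ip link set nvmf_init_br up && ip link set nvmf_tgt_br up
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
  ip netns exec nvmf_tgt_ns_spdk ip link set lo up
  ip link add nvmf_br type bridge && ip link set nvmf_br up      # bridge the host-side veth ends
  ip link set nvmf_init_br master nvmf_br
  ip link set nvmf_tgt_br master nvmf_br
  iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.2                                             # initiator can reach the target address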
00:25:36.885 [2024-04-18 09:54:00.157302] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:25:36.885 [2024-04-18 09:54:00.157827] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:25:36.885 [2024-04-18 09:54:00.157738] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:25:36.885 [2024-04-18 09:54:00.157827] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:25:37.452 09:54:00 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:25:37.452 09:54:00 -- common/autotest_common.sh@850 -- # return 0 00:25:37.452 09:54:00 -- host/fio.sh@27 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:25:37.452 09:54:00 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:37.452 09:54:00 -- common/autotest_common.sh@10 -- # set +x 00:25:37.452 [2024-04-18 09:54:00.710056] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:37.452 09:54:00 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:37.452 09:54:00 -- host/fio.sh@28 -- # timing_exit start_nvmf_tgt 00:25:37.452 09:54:00 -- common/autotest_common.sh@716 -- # xtrace_disable 00:25:37.452 09:54:00 -- common/autotest_common.sh@10 -- # set +x 00:25:37.452 09:54:00 -- host/fio.sh@30 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:25:37.452 09:54:00 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:37.452 09:54:00 -- common/autotest_common.sh@10 -- # set +x 00:25:37.452 Malloc1 00:25:37.452 09:54:00 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:37.452 09:54:00 -- host/fio.sh@31 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:25:37.452 09:54:00 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:37.452 09:54:00 -- common/autotest_common.sh@10 -- # set +x 00:25:37.710 09:54:00 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:37.710 09:54:00 -- host/fio.sh@32 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:25:37.710 09:54:00 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:37.710 09:54:00 -- common/autotest_common.sh@10 -- # set +x 00:25:37.710 09:54:00 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:37.710 09:54:00 -- host/fio.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:25:37.710 09:54:00 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:37.710 09:54:00 -- common/autotest_common.sh@10 -- # set +x 00:25:37.710 [2024-04-18 09:54:00.866208] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:37.710 09:54:00 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:37.710 09:54:00 -- host/fio.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:25:37.710 09:54:00 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:37.710 09:54:00 -- common/autotest_common.sh@10 -- # set +x 00:25:37.710 09:54:00 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:37.710 09:54:00 -- host/fio.sh@36 -- # PLUGIN_DIR=/home/vagrant/spdk_repo/spdk/app/fio/nvme 00:25:37.710 09:54:00 -- host/fio.sh@39 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:25:37.710 09:54:00 -- common/autotest_common.sh@1346 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 
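The rpc_cmd calls above (host/fio.sh@27-34) are what stand up the target that fio is about to exercise: a TCP transport, a 64 MiB malloc bdev, and one subsystem exposing that bdev on 10.0.0.2:4420. The same sequence written out as plain rpc.py calls, reconstructed from the trace (flags exactly as the harness passed them; the default /var/tmp/spdk.sock RPC socket is assumed):

  RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  $RPC nvmf_create_transport -t tcp -o -u 8192
  $RPC bdev_malloc_create 64 512 -b Malloc1                      # 64 MiB bdev, 512-byte blocks
  $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001   # -a: allow any host
  $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
  $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  $RPC nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420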
00:25:37.710 09:54:00 -- common/autotest_common.sh@1323 -- # local fio_dir=/usr/src/fio 00:25:37.710 09:54:00 -- common/autotest_common.sh@1325 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:25:37.710 09:54:00 -- common/autotest_common.sh@1325 -- # local sanitizers 00:25:37.710 09:54:00 -- common/autotest_common.sh@1326 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:25:37.710 09:54:00 -- common/autotest_common.sh@1327 -- # shift 00:25:37.710 09:54:00 -- common/autotest_common.sh@1329 -- # local asan_lib= 00:25:37.710 09:54:00 -- common/autotest_common.sh@1330 -- # for sanitizer in "${sanitizers[@]}" 00:25:37.711 09:54:00 -- common/autotest_common.sh@1331 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:25:37.711 09:54:00 -- common/autotest_common.sh@1331 -- # grep libasan 00:25:37.711 09:54:00 -- common/autotest_common.sh@1331 -- # awk '{print $3}' 00:25:37.711 09:54:00 -- common/autotest_common.sh@1331 -- # asan_lib=/usr/lib64/libasan.so.8 00:25:37.711 09:54:00 -- common/autotest_common.sh@1332 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:25:37.711 09:54:00 -- common/autotest_common.sh@1333 -- # break 00:25:37.711 09:54:00 -- common/autotest_common.sh@1338 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:25:37.711 09:54:00 -- common/autotest_common.sh@1338 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:25:37.711 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:25:37.711 fio-3.35 00:25:37.711 Starting 1 thread 00:25:40.237 00:25:40.237 test: (groupid=0, jobs=1): err= 0: pid=75578: Thu Apr 18 09:54:03 2024 00:25:40.238 read: IOPS=6442, BW=25.2MiB/s (26.4MB/s)(50.6MiB/2009msec) 00:25:40.238 slat (usec): min=2, max=370, avg= 3.40, stdev= 4.23 00:25:40.238 clat (usec): min=3088, max=18134, avg=10318.78, stdev=743.43 00:25:40.238 lat (usec): min=3116, max=18137, avg=10322.18, stdev=742.94 00:25:40.238 clat percentiles (usec): 00:25:40.238 | 1.00th=[ 8848], 5.00th=[ 9372], 10.00th=[ 9503], 20.00th=[ 9765], 00:25:40.238 | 30.00th=[10028], 40.00th=[10159], 50.00th=[10290], 60.00th=[10421], 00:25:40.238 | 70.00th=[10683], 80.00th=[10814], 90.00th=[11076], 95.00th=[11338], 00:25:40.238 | 99.00th=[11863], 99.50th=[12125], 99.90th=[16909], 99.95th=[17957], 00:25:40.238 | 99.99th=[18220] 00:25:40.238 bw ( KiB/s): min=24552, max=26448, per=99.96%, avg=25760.00, stdev=888.61, samples=4 00:25:40.238 iops : min= 6138, max= 6612, avg=6440.00, stdev=222.15, samples=4 00:25:40.238 write: IOPS=6450, BW=25.2MiB/s (26.4MB/s)(50.6MiB/2009msec); 0 zone resets 00:25:40.238 slat (usec): min=2, max=138, avg= 3.52, stdev= 1.91 00:25:40.238 clat (usec): min=2872, max=17744, avg=9414.41, stdev=688.11 00:25:40.238 lat (usec): min=2897, max=17747, avg=9417.94, stdev=687.70 00:25:40.238 clat percentiles (usec): 00:25:40.238 | 1.00th=[ 8029], 5.00th=[ 8455], 10.00th=[ 8717], 20.00th=[ 8979], 00:25:40.238 | 30.00th=[ 9110], 40.00th=[ 9241], 50.00th=[ 9372], 60.00th=[ 9503], 00:25:40.238 | 70.00th=[ 9634], 80.00th=[ 9896], 90.00th=[10159], 95.00th=[10290], 00:25:40.238 | 99.00th=[10814], 99.50th=[11076], 99.90th=[15664], 99.95th=[16909], 00:25:40.238 | 99.99th=[17695] 00:25:40.238 bw ( KiB/s): min=25648, max=25920, per=99.95%, avg=25792.00, stdev=136.24, samples=4 00:25:40.238 iops : min= 6412, max= 6480, avg=6448.00, stdev=34.06, samples=4 00:25:40.238 lat (msec) : 
4=0.12%, 10=58.17%, 20=41.71% 00:25:40.238 cpu : usr=74.10%, sys=18.87%, ctx=13, majf=0, minf=1541 00:25:40.238 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.8% 00:25:40.238 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:40.238 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:25:40.238 issued rwts: total=12943,12960,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:40.238 latency : target=0, window=0, percentile=100.00%, depth=128 00:25:40.238 00:25:40.238 Run status group 0 (all jobs): 00:25:40.238 READ: bw=25.2MiB/s (26.4MB/s), 25.2MiB/s-25.2MiB/s (26.4MB/s-26.4MB/s), io=50.6MiB (53.0MB), run=2009-2009msec 00:25:40.238 WRITE: bw=25.2MiB/s (26.4MB/s), 25.2MiB/s-25.2MiB/s (26.4MB/s-26.4MB/s), io=50.6MiB (53.1MB), run=2009-2009msec 00:25:40.238 ----------------------------------------------------- 00:25:40.238 Suppressions used: 00:25:40.238 count bytes template 00:25:40.238 1 57 /usr/src/fio/parse.c 00:25:40.238 1 8 libtcmalloc_minimal.so 00:25:40.238 ----------------------------------------------------- 00:25:40.238 00:25:40.496 09:54:03 -- host/fio.sh@43 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:25:40.496 09:54:03 -- common/autotest_common.sh@1346 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:25:40.496 09:54:03 -- common/autotest_common.sh@1323 -- # local fio_dir=/usr/src/fio 00:25:40.496 09:54:03 -- common/autotest_common.sh@1325 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:25:40.496 09:54:03 -- common/autotest_common.sh@1325 -- # local sanitizers 00:25:40.496 09:54:03 -- common/autotest_common.sh@1326 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:25:40.496 09:54:03 -- common/autotest_common.sh@1327 -- # shift 00:25:40.496 09:54:03 -- common/autotest_common.sh@1329 -- # local asan_lib= 00:25:40.496 09:54:03 -- common/autotest_common.sh@1330 -- # for sanitizer in "${sanitizers[@]}" 00:25:40.496 09:54:03 -- common/autotest_common.sh@1331 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:25:40.496 09:54:03 -- common/autotest_common.sh@1331 -- # awk '{print $3}' 00:25:40.496 09:54:03 -- common/autotest_common.sh@1331 -- # grep libasan 00:25:40.496 09:54:03 -- common/autotest_common.sh@1331 -- # asan_lib=/usr/lib64/libasan.so.8 00:25:40.496 09:54:03 -- common/autotest_common.sh@1332 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:25:40.496 09:54:03 -- common/autotest_common.sh@1333 -- # break 00:25:40.496 09:54:03 -- common/autotest_common.sh@1338 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:25:40.496 09:54:03 -- common/autotest_common.sh@1338 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:25:40.496 test: (g=0): rw=randrw, bs=(R) 16.0KiB-16.0KiB, (W) 16.0KiB-16.0KiB, (T) 16.0KiB-16.0KiB, ioengine=spdk, iodepth=128 00:25:40.496 fio-3.35 00:25:40.496 Starting 1 thread 00:25:43.035 00:25:43.035 test: (groupid=0, jobs=1): err= 0: pid=75614: Thu Apr 18 09:54:06 2024 00:25:43.035 read: IOPS=5780, BW=90.3MiB/s (94.7MB/s)(181MiB/2007msec) 00:25:43.035 slat (usec): min=3, max=141, avg= 5.47, stdev= 2.42 00:25:43.035 clat (usec): min=2459, max=26912, avg=12433.01, stdev=3406.48 
00:25:43.035 lat (usec): min=2464, max=26918, avg=12438.48, stdev=3406.56 00:25:43.035 clat percentiles (usec): 00:25:43.035 | 1.00th=[ 6194], 5.00th=[ 7242], 10.00th=[ 8029], 20.00th=[ 9241], 00:25:43.035 | 30.00th=[10421], 40.00th=[11469], 50.00th=[12256], 60.00th=[13173], 00:25:43.035 | 70.00th=[14091], 80.00th=[15270], 90.00th=[17171], 95.00th=[18220], 00:25:43.035 | 99.00th=[21103], 99.50th=[21890], 99.90th=[22938], 99.95th=[23200], 00:25:43.035 | 99.99th=[25297] 00:25:43.035 bw ( KiB/s): min=45472, max=48768, per=51.30%, avg=47448.00, stdev=1615.58, samples=4 00:25:43.035 iops : min= 2842, max= 3048, avg=2965.50, stdev=100.97, samples=4 00:25:43.035 write: IOPS=3355, BW=52.4MiB/s (55.0MB/s)(96.5MiB/1841msec); 0 zone resets 00:25:43.035 slat (usec): min=38, max=210, avg=41.79, stdev= 6.51 00:25:43.035 clat (usec): min=7120, max=29580, avg=17212.56, stdev=3084.57 00:25:43.035 lat (usec): min=7193, max=29618, avg=17254.36, stdev=3084.70 00:25:43.035 clat percentiles (usec): 00:25:43.036 | 1.00th=[11338], 5.00th=[12911], 10.00th=[13698], 20.00th=[14615], 00:25:43.036 | 30.00th=[15270], 40.00th=[15926], 50.00th=[16712], 60.00th=[17695], 00:25:43.036 | 70.00th=[18744], 80.00th=[20055], 90.00th=[21365], 95.00th=[22676], 00:25:43.036 | 99.00th=[25297], 99.50th=[26346], 99.90th=[28443], 99.95th=[29230], 00:25:43.036 | 99.99th=[29492] 00:25:43.036 bw ( KiB/s): min=48096, max=50944, per=92.05%, avg=49416.00, stdev=1378.94, samples=4 00:25:43.036 iops : min= 3006, max= 3184, avg=3088.50, stdev=86.18, samples=4 00:25:43.036 lat (msec) : 4=0.13%, 10=17.22%, 20=74.42%, 50=8.22% 00:25:43.036 cpu : usr=80.36%, sys=15.35%, ctx=24, majf=0, minf=2030 00:25:43.036 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.7%, >=64=98.6% 00:25:43.036 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:43.036 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:25:43.036 issued rwts: total=11601,6177,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:43.036 latency : target=0, window=0, percentile=100.00%, depth=128 00:25:43.036 00:25:43.036 Run status group 0 (all jobs): 00:25:43.036 READ: bw=90.3MiB/s (94.7MB/s), 90.3MiB/s-90.3MiB/s (94.7MB/s-94.7MB/s), io=181MiB (190MB), run=2007-2007msec 00:25:43.036 WRITE: bw=52.4MiB/s (55.0MB/s), 52.4MiB/s-52.4MiB/s (55.0MB/s-55.0MB/s), io=96.5MiB (101MB), run=1841-1841msec 00:25:43.036 ----------------------------------------------------- 00:25:43.036 Suppressions used: 00:25:43.036 count bytes template 00:25:43.036 1 57 /usr/src/fio/parse.c 00:25:43.036 507 48672 /usr/src/fio/iolog.c 00:25:43.036 1 8 libtcmalloc_minimal.so 00:25:43.036 ----------------------------------------------------- 00:25:43.036 00:25:43.036 09:54:06 -- host/fio.sh@45 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:25:43.036 09:54:06 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:43.036 09:54:06 -- common/autotest_common.sh@10 -- # set +x 00:25:43.036 09:54:06 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:43.036 09:54:06 -- host/fio.sh@47 -- # '[' 0 -eq 1 ']' 00:25:43.036 09:54:06 -- host/fio.sh@81 -- # trap - SIGINT SIGTERM EXIT 00:25:43.036 09:54:06 -- host/fio.sh@83 -- # rm -f ./local-test-0-verify.state 00:25:43.036 09:54:06 -- host/fio.sh@84 -- # nvmftestfini 00:25:43.036 09:54:06 -- nvmf/common.sh@477 -- # nvmfcleanup 00:25:43.036 09:54:06 -- nvmf/common.sh@117 -- # sync 00:25:43.036 09:54:06 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:25:43.036 09:54:06 -- nvmf/common.sh@120 -- # set +e 00:25:43.036 
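Before the two fio runs above, autotest_common.sh@1323-1338 looks up which ASan runtime the SPDK fio plugin is linked against and preloads it ahead of the plugin, so the sanitizer runtime is initialized before the ioengine loads; the NVMe-oF target is addressed entirely through the --filename string rather than a block device. Condensed into a sketch (paths and arguments as in this run):

  PLUGIN=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme
  ASAN_LIB=$(ldd "$PLUGIN" | grep libasan | awk '{print $3}')    # same lookup the harness performs
  LD_PRELOAD="$ASAN_LIB $PLUGIN" /usr/src/fio/fio \
      /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio \
      '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' \
      --bs=4096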
09:54:06 -- nvmf/common.sh@121 -- # for i in {1..20} 00:25:43.036 09:54:06 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:25:43.036 rmmod nvme_tcp 00:25:43.294 rmmod nvme_fabrics 00:25:43.294 rmmod nvme_keyring 00:25:43.294 09:54:06 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:25:43.294 09:54:06 -- nvmf/common.sh@124 -- # set -e 00:25:43.294 09:54:06 -- nvmf/common.sh@125 -- # return 0 00:25:43.294 09:54:06 -- nvmf/common.sh@478 -- # '[' -n 75525 ']' 00:25:43.294 09:54:06 -- nvmf/common.sh@479 -- # killprocess 75525 00:25:43.294 09:54:06 -- common/autotest_common.sh@936 -- # '[' -z 75525 ']' 00:25:43.294 09:54:06 -- common/autotest_common.sh@940 -- # kill -0 75525 00:25:43.294 09:54:06 -- common/autotest_common.sh@941 -- # uname 00:25:43.294 09:54:06 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:25:43.294 09:54:06 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 75525 00:25:43.294 killing process with pid 75525 00:25:43.294 09:54:06 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:25:43.294 09:54:06 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:25:43.294 09:54:06 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 75525' 00:25:43.294 09:54:06 -- common/autotest_common.sh@955 -- # kill 75525 00:25:43.294 09:54:06 -- common/autotest_common.sh@960 -- # wait 75525 00:25:44.669 09:54:07 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:25:44.669 09:54:07 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:25:44.669 09:54:07 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:25:44.669 09:54:07 -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:25:44.669 09:54:07 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:25:44.669 09:54:07 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:44.669 09:54:07 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:25:44.669 09:54:07 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:44.669 09:54:07 -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:25:44.669 ************************************ 00:25:44.669 END TEST nvmf_fio_host 00:25:44.669 ************************************ 00:25:44.669 00:25:44.669 real 0m8.710s 00:25:44.669 user 0m32.846s 00:25:44.669 sys 0m2.281s 00:25:44.669 09:54:07 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:25:44.669 09:54:07 -- common/autotest_common.sh@10 -- # set +x 00:25:44.669 09:54:07 -- nvmf/nvmf.sh@98 -- # run_test nvmf_failover /home/vagrant/spdk_repo/spdk/test/nvmf/host/failover.sh --transport=tcp 00:25:44.669 09:54:07 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:25:44.669 09:54:07 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:25:44.669 09:54:07 -- common/autotest_common.sh@10 -- # set +x 00:25:44.669 ************************************ 00:25:44.669 START TEST nvmf_failover 00:25:44.669 ************************************ 00:25:44.669 09:54:07 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/failover.sh --transport=tcp 00:25:44.929 * Looking for test storage... 
00:25:44.929 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:25:44.929 09:54:08 -- host/failover.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:25:44.929 09:54:08 -- nvmf/common.sh@7 -- # uname -s 00:25:44.929 09:54:08 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:25:44.929 09:54:08 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:44.929 09:54:08 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:44.929 09:54:08 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:44.929 09:54:08 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:25:44.929 09:54:08 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:44.929 09:54:08 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:44.929 09:54:08 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:44.929 09:54:08 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:44.929 09:54:08 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:44.929 09:54:08 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:e0c1fd61-2902-4c83-8893-eeb512aeb67d 00:25:44.929 09:54:08 -- nvmf/common.sh@18 -- # NVME_HOSTID=e0c1fd61-2902-4c83-8893-eeb512aeb67d 00:25:44.929 09:54:08 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:44.929 09:54:08 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:44.929 09:54:08 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:25:44.929 09:54:08 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:25:44.929 09:54:08 -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:25:44.929 09:54:08 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:44.929 09:54:08 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:44.929 09:54:08 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:44.929 09:54:08 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:44.929 09:54:08 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:44.929 09:54:08 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:44.929 09:54:08 -- paths/export.sh@5 -- # export PATH 00:25:44.929 09:54:08 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:44.929 09:54:08 -- nvmf/common.sh@47 -- # : 0 00:25:44.929 09:54:08 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:25:44.929 09:54:08 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:25:44.929 09:54:08 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:25:44.929 09:54:08 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:44.929 09:54:08 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:44.929 09:54:08 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:25:44.929 09:54:08 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:25:44.929 09:54:08 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:25:44.929 09:54:08 -- host/failover.sh@11 -- # MALLOC_BDEV_SIZE=64 00:25:44.929 09:54:08 -- host/failover.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:25:44.929 09:54:08 -- host/failover.sh@14 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:25:44.929 09:54:08 -- host/failover.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:25:44.929 09:54:08 -- host/failover.sh@18 -- # nvmftestinit 00:25:44.929 09:54:08 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:25:44.929 09:54:08 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:25:44.929 09:54:08 -- nvmf/common.sh@437 -- # prepare_net_devs 00:25:44.929 09:54:08 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:25:44.929 09:54:08 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:25:44.929 09:54:08 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:44.929 09:54:08 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:25:44.929 09:54:08 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:44.929 09:54:08 -- nvmf/common.sh@403 -- # [[ virt != virt ]] 00:25:44.929 09:54:08 -- nvmf/common.sh@405 -- # [[ no == yes ]] 00:25:44.929 09:54:08 -- nvmf/common.sh@412 -- # [[ virt == phy ]] 00:25:44.929 09:54:08 -- nvmf/common.sh@415 -- # [[ virt == phy-fallback ]] 00:25:44.929 09:54:08 -- nvmf/common.sh@420 -- # [[ tcp == tcp ]] 00:25:44.929 09:54:08 -- nvmf/common.sh@421 -- # nvmf_veth_init 00:25:44.929 09:54:08 -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:25:44.929 09:54:08 -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:25:44.929 09:54:08 -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:25:44.929 09:54:08 -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:25:44.929 09:54:08 -- 
nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:25:44.929 09:54:08 -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:25:44.929 09:54:08 -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:25:44.929 09:54:08 -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:25:44.929 09:54:08 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:25:44.929 09:54:08 -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:25:44.929 09:54:08 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:25:44.929 09:54:08 -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:25:44.929 09:54:08 -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:25:44.929 09:54:08 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:25:44.929 Cannot find device "nvmf_tgt_br" 00:25:44.929 09:54:08 -- nvmf/common.sh@155 -- # true 00:25:44.929 09:54:08 -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:25:44.929 Cannot find device "nvmf_tgt_br2" 00:25:44.929 09:54:08 -- nvmf/common.sh@156 -- # true 00:25:44.929 09:54:08 -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:25:44.929 09:54:08 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:25:44.929 Cannot find device "nvmf_tgt_br" 00:25:44.929 09:54:08 -- nvmf/common.sh@158 -- # true 00:25:44.929 09:54:08 -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:25:44.929 Cannot find device "nvmf_tgt_br2" 00:25:44.929 09:54:08 -- nvmf/common.sh@159 -- # true 00:25:44.929 09:54:08 -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:25:44.929 09:54:08 -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:25:44.929 09:54:08 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:25:44.929 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:25:44.929 09:54:08 -- nvmf/common.sh@162 -- # true 00:25:44.929 09:54:08 -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:25:44.929 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:25:44.929 09:54:08 -- nvmf/common.sh@163 -- # true 00:25:44.929 09:54:08 -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:25:44.929 09:54:08 -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:25:44.929 09:54:08 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:25:44.929 09:54:08 -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:25:44.929 09:54:08 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:25:44.929 09:54:08 -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:25:44.929 09:54:08 -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:25:44.929 09:54:08 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:25:45.196 09:54:08 -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:25:45.196 09:54:08 -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:25:45.196 09:54:08 -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:25:45.196 09:54:08 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:25:45.196 09:54:08 -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:25:45.196 09:54:08 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if 
up 00:25:45.196 09:54:08 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:25:45.196 09:54:08 -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:25:45.196 09:54:08 -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:25:45.196 09:54:08 -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:25:45.196 09:54:08 -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:25:45.196 09:54:08 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:25:45.196 09:54:08 -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:25:45.196 09:54:08 -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:25:45.196 09:54:08 -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:25:45.196 09:54:08 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:25:45.196 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:25:45.196 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.087 ms 00:25:45.196 00:25:45.196 --- 10.0.0.2 ping statistics --- 00:25:45.196 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:45.196 rtt min/avg/max/mdev = 0.087/0.087/0.087/0.000 ms 00:25:45.196 09:54:08 -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:25:45.196 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:25:45.196 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.061 ms 00:25:45.196 00:25:45.196 --- 10.0.0.3 ping statistics --- 00:25:45.196 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:45.196 rtt min/avg/max/mdev = 0.061/0.061/0.061/0.000 ms 00:25:45.196 09:54:08 -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:25:45.196 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:25:45.196 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.020 ms 00:25:45.196 00:25:45.196 --- 10.0.0.1 ping statistics --- 00:25:45.196 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:45.196 rtt min/avg/max/mdev = 0.020/0.020/0.020/0.000 ms 00:25:45.196 09:54:08 -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:25:45.196 09:54:08 -- nvmf/common.sh@422 -- # return 0 00:25:45.196 09:54:08 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:25:45.196 09:54:08 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:25:45.196 09:54:08 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:25:45.196 09:54:08 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:25:45.196 09:54:08 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:25:45.196 09:54:08 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:25:45.196 09:54:08 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:25:45.196 09:54:08 -- host/failover.sh@20 -- # nvmfappstart -m 0xE 00:25:45.196 09:54:08 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:25:45.196 09:54:08 -- common/autotest_common.sh@710 -- # xtrace_disable 00:25:45.196 09:54:08 -- common/autotest_common.sh@10 -- # set +x 00:25:45.196 09:54:08 -- nvmf/common.sh@470 -- # nvmfpid=75849 00:25:45.196 09:54:08 -- nvmf/common.sh@469 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:25:45.196 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:25:45.196 09:54:08 -- nvmf/common.sh@471 -- # waitforlisten 75849 00:25:45.196 09:54:08 -- common/autotest_common.sh@817 -- # '[' -z 75849 ']' 00:25:45.196 09:54:08 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:45.196 09:54:08 -- common/autotest_common.sh@822 -- # local max_retries=100 00:25:45.196 09:54:08 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:45.196 09:54:08 -- common/autotest_common.sh@826 -- # xtrace_disable 00:25:45.196 09:54:08 -- common/autotest_common.sh@10 -- # set +x 00:25:45.196 [2024-04-18 09:54:08.571908] Starting SPDK v24.05-pre git sha1 65b4e17c6 / DPDK 23.11.0 initialization... 00:25:45.196 [2024-04-18 09:54:08.572261] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:45.454 [2024-04-18 09:54:08.741934] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 3 00:25:45.713 [2024-04-18 09:54:09.017997] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:25:45.713 [2024-04-18 09:54:09.018307] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:25:45.713 [2024-04-18 09:54:09.018345] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:25:45.713 [2024-04-18 09:54:09.018376] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:25:45.713 [2024-04-18 09:54:09.018393] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:25:45.713 [2024-04-18 09:54:09.019165] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:25:45.713 [2024-04-18 09:54:09.019309] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:25:45.713 [2024-04-18 09:54:09.019324] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:25:46.281 09:54:09 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:25:46.281 09:54:09 -- common/autotest_common.sh@850 -- # return 0 00:25:46.281 09:54:09 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:25:46.281 09:54:09 -- common/autotest_common.sh@716 -- # xtrace_disable 00:25:46.281 09:54:09 -- common/autotest_common.sh@10 -- # set +x 00:25:46.281 09:54:09 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:25:46.281 09:54:09 -- host/failover.sh@22 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:25:46.540 [2024-04-18 09:54:09.726063] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:46.540 09:54:09 -- host/failover.sh@23 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:25:46.796 Malloc0 00:25:46.796 09:54:10 -- host/failover.sh@24 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:25:47.054 09:54:10 -- host/failover.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:25:47.313 09:54:10 -- host/failover.sh@26 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:25:47.571 [2024-04-18 09:54:10.825022] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:47.571 09:54:10 -- host/failover.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:25:47.828 [2024-04-18 09:54:11.093449] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:25:47.828 09:54:11 -- host/failover.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:25:48.086 [2024-04-18 09:54:11.317866] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4422 *** 00:25:48.086 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:25:48.086 09:54:11 -- host/failover.sh@31 -- # bdevperf_pid=75911 00:25:48.086 09:54:11 -- host/failover.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 15 -f 00:25:48.086 09:54:11 -- host/failover.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; cat $testdir/try.txt; rm -f $testdir/try.txt; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:25:48.086 09:54:11 -- host/failover.sh@34 -- # waitforlisten 75911 /var/tmp/bdevperf.sock 00:25:48.086 09:54:11 -- common/autotest_common.sh@817 -- # '[' -z 75911 ']' 00:25:48.086 09:54:11 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:25:48.086 09:54:11 -- common/autotest_common.sh@822 -- # local max_retries=100 00:25:48.086 09:54:11 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
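The failover suite repeats the target setup but publishes the same subsystem on three TCP portals, and the initiator side is bdevperf rather than fio, parked on its own RPC socket until the test drives it. Condensed from the rpc and bdevperf lines above (paths as in this run; -z makes bdevperf wait for RPC configuration before running):

  RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  $RPC nvmf_create_transport -t tcp -o -u 8192
  $RPC bdev_malloc_create 64 512 -b Malloc0
  $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  for port in 4420 4421 4422; do                                 # three portals on the same target address
      $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s "$port"
  done
  /home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
      -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 15 -f &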
00:25:48.086 09:54:11 -- common/autotest_common.sh@826 -- # xtrace_disable 00:25:48.086 09:54:11 -- common/autotest_common.sh@10 -- # set +x 00:25:49.021 09:54:12 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:25:49.021 09:54:12 -- common/autotest_common.sh@850 -- # return 0 00:25:49.021 09:54:12 -- host/failover.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:25:49.279 NVMe0n1 00:25:49.279 09:54:12 -- host/failover.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:25:49.847 00:25:49.847 09:54:12 -- host/failover.sh@39 -- # run_test_pid=75930 00:25:49.847 09:54:12 -- host/failover.sh@38 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:25:49.847 09:54:12 -- host/failover.sh@41 -- # sleep 1 00:25:50.783 09:54:13 -- host/failover.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:25:51.041 [2024-04-18 09:54:14.202267] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002880 is same with the state(5) to be set 00:25:51.042 [2024-04-18 09:54:14.203229] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002880 is same with the state(5) to be set 00:25:51.042 [2024-04-18 09:54:14.203339] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002880 is same with the state(5) to be set 00:25:51.042 [2024-04-18 09:54:14.203431] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002880 is same with the state(5) to be set 00:25:51.042 [2024-04-18 09:54:14.203509] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002880 is same with the state(5) to be set 00:25:51.042 [2024-04-18 09:54:14.203585] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002880 is same with the state(5) to be set 00:25:51.042 [2024-04-18 09:54:14.203651] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002880 is same with the state(5) to be set 00:25:51.042 [2024-04-18 09:54:14.203727] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002880 is same with the state(5) to be set 00:25:51.042 [2024-04-18 09:54:14.203802] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002880 is same with the state(5) to be set 00:25:51.042 [2024-04-18 09:54:14.203999] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002880 is same with the state(5) to be set 00:25:51.042 [2024-04-18 09:54:14.204119] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002880 is same with the state(5) to be set 00:25:51.042 [2024-04-18 09:54:14.204232] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002880 is same with the state(5) to be set 00:25:51.042 [2024-04-18 09:54:14.204310] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002880 is same with the state(5) to be set 00:25:51.042 [2024-04-18 09:54:14.204493] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002880 is 
same with the state(5) to be set 00:25:51.042 [2024-04-18 09:54:14.204594] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002880 is same with the state(5) to be set 00:25:51.042 [2024-04-18 09:54:14.204672] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002880 is same with the state(5) to be set 00:25:51.042 [2024-04-18 09:54:14.204751] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002880 is same with the state(5) to be set 00:25:51.042 [2024-04-18 09:54:14.204936] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002880 is same with the state(5) to be set 00:25:51.042 [2024-04-18 09:54:14.205024] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002880 is same with the state(5) to be set 00:25:51.042 [2024-04-18 09:54:14.205104] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002880 is same with the state(5) to be set 00:25:51.042 [2024-04-18 09:54:14.205205] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002880 is same with the state(5) to be set 00:25:51.042 [2024-04-18 09:54:14.205399] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002880 is same with the state(5) to be set 00:25:51.042 [2024-04-18 09:54:14.205501] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002880 is same with the state(5) to be set 00:25:51.042 09:54:14 -- host/failover.sh@45 -- # sleep 3 00:25:54.327 09:54:17 -- host/failover.sh@47 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:25:54.327 00:25:54.327 09:54:17 -- host/failover.sh@48 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:25:54.586 [2024-04-18 09:54:17.892528] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003080 is same with the state(5) to be set 00:25:54.586 [2024-04-18 09:54:17.892612] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003080 is same with the state(5) to be set 00:25:54.586 [2024-04-18 09:54:17.892635] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003080 is same with the state(5) to be set 00:25:54.586 [2024-04-18 09:54:17.892648] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003080 is same with the state(5) to be set 00:25:54.586 [2024-04-18 09:54:17.892663] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003080 is same with the state(5) to be set 00:25:54.586 [2024-04-18 09:54:17.892676] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003080 is same with the state(5) to be set 00:25:54.586 [2024-04-18 09:54:17.892692] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003080 is same with the state(5) to be set 00:25:54.586 09:54:17 -- host/failover.sh@50 -- # sleep 3 00:25:58.060 09:54:20 -- host/failover.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:25:58.060 [2024-04-18 09:54:21.142042] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening 
on 10.0.0.2 port 4420 *** 00:25:58.060 09:54:21 -- host/failover.sh@55 -- # sleep 1 00:25:59.026 09:54:22 -- host/failover.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:25:59.290 [2024-04-18 09:54:22.435495] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003c80 is same with the state(5) to be set 00:25:59.290 [2024-04-18 09:54:22.435576] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003c80 is same with the state(5) to be set 00:25:59.290 [2024-04-18 09:54:22.435599] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003c80 is same with the state(5) to be set 00:25:59.290 [2024-04-18 09:54:22.435612] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003c80 is same with the state(5) to be set 00:25:59.290 [2024-04-18 09:54:22.435631] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003c80 is same with the state(5) to be set 00:25:59.290 [2024-04-18 09:54:22.435644] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003c80 is same with the state(5) to be set 00:25:59.290 [2024-04-18 09:54:22.435659] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003c80 is same with the state(5) to be set 00:25:59.290 [2024-04-18 09:54:22.435672] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003c80 is same with the state(5) to be set 00:25:59.290 [2024-04-18 09:54:22.435686] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003c80 is same with the state(5) to be set 00:25:59.290 [2024-04-18 09:54:22.435699] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003c80 is same with the state(5) to be set 00:25:59.290 [2024-04-18 09:54:22.435714] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003c80 is same with the state(5) to be set 00:25:59.290 [2024-04-18 09:54:22.435727] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003c80 is same with the state(5) to be set 00:25:59.290 [2024-04-18 09:54:22.435745] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003c80 is same with the state(5) to be set 00:25:59.290 [2024-04-18 09:54:22.435757] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003c80 is same with the state(5) to be set 00:25:59.290 [2024-04-18 09:54:22.435772] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003c80 is same with the state(5) to be set 00:25:59.290 [2024-04-18 09:54:22.435785] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003c80 is same with the state(5) to be set 00:25:59.290 [2024-04-18 09:54:22.435800] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003c80 is same with the state(5) to be set 00:25:59.290 [2024-04-18 09:54:22.435812] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003c80 is same with the state(5) to be set 00:25:59.290 [2024-04-18 09:54:22.435827] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003c80 is same with the state(5) to be set 00:25:59.290 09:54:22 -- host/failover.sh@59 -- # wait 75930 00:26:05.849 0 
00:26:05.849 09:54:28 -- host/failover.sh@61 -- # killprocess 75911 00:26:05.849 09:54:28 -- common/autotest_common.sh@936 -- # '[' -z 75911 ']' 00:26:05.849 09:54:28 -- common/autotest_common.sh@940 -- # kill -0 75911 00:26:05.849 09:54:28 -- common/autotest_common.sh@941 -- # uname 00:26:05.849 09:54:28 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:26:05.849 09:54:28 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 75911 00:26:05.849 killing process with pid 75911 00:26:05.849 09:54:28 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:26:05.849 09:54:28 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:26:05.849 09:54:28 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 75911' 00:26:05.849 09:54:28 -- common/autotest_common.sh@955 -- # kill 75911 00:26:05.849 09:54:28 -- common/autotest_common.sh@960 -- # wait 75911 00:26:06.115 09:54:29 -- host/failover.sh@63 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:26:06.115 [2024-04-18 09:54:11.426160] Starting SPDK v24.05-pre git sha1 65b4e17c6 / DPDK 23.11.0 initialization... 00:26:06.115 [2024-04-18 09:54:11.426416] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75911 ] 00:26:06.115 [2024-04-18 09:54:11.588075] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:06.115 [2024-04-18 09:54:11.830824] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:26:06.115 Running I/O for 15 seconds... 00:26:06.115 [2024-04-18 09:54:14.205699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:51328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:06.115 [2024-04-18 09:54:14.205796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:06.115 [2024-04-18 09:54:14.205854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:51336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:06.115 [2024-04-18 09:54:14.205878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:06.115 [2024-04-18 09:54:14.205906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:51344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:06.115 [2024-04-18 09:54:14.205927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:06.115 [2024-04-18 09:54:14.205954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:51352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:06.115 [2024-04-18 09:54:14.205974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:06.115 [2024-04-18 09:54:14.205998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:51360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:06.115 [2024-04-18 09:54:14.206018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:06.115 [2024-04-18 09:54:14.206051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 
nsid:1 lba:51368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:06.115 [2024-04-18 09:54:14.206071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:06.115 [2024-04-18 09:54:14.206096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:51376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:06.115 [2024-04-18 09:54:14.206117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:06.115 [2024-04-18 09:54:14.206158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:51384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:06.115 [2024-04-18 09:54:14.206180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:06.115 [2024-04-18 09:54:14.206205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:51392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:06.115 [2024-04-18 09:54:14.206226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:06.115 [2024-04-18 09:54:14.206254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:51400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:06.115 [2024-04-18 09:54:14.206274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:06.115 [2024-04-18 09:54:14.206300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:51408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:06.115 [2024-04-18 09:54:14.206320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:06.115 [2024-04-18 09:54:14.206372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:51416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:06.115 [2024-04-18 09:54:14.206395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:06.115 [2024-04-18 09:54:14.206419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:51424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:06.115 [2024-04-18 09:54:14.206440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:06.115 [2024-04-18 09:54:14.206465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:51432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:06.115 [2024-04-18 09:54:14.206485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:06.115 [2024-04-18 09:54:14.206527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:51440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:06.115 [2024-04-18 09:54:14.206548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:06.115 [2024-04-18 09:54:14.206572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:51448 len:8 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:26:06.115 [2024-04-18 09:54:14.206593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:06.115 [2024-04-18 09:54:14.206618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:51456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:06.115 [2024-04-18 09:54:14.206649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:06.115 [2024-04-18 09:54:14.206676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:51464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:06.115 [2024-04-18 09:54:14.206697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:06.115 [2024-04-18 09:54:14.206721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:51472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:06.115 [2024-04-18 09:54:14.206742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:06.115 [2024-04-18 09:54:14.206767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:51480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:06.115 [2024-04-18 09:54:14.206788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:06.115 [2024-04-18 09:54:14.206812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:51488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:06.115 [2024-04-18 09:54:14.206832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:06.115 [2024-04-18 09:54:14.206857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:51496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:06.115 [2024-04-18 09:54:14.206877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:06.115 [2024-04-18 09:54:14.206901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:51504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:06.115 [2024-04-18 09:54:14.206922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:06.115 [2024-04-18 09:54:14.206946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:51512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:06.115 [2024-04-18 09:54:14.206975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:06.115 [2024-04-18 09:54:14.207004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:51840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:06.115 [2024-04-18 09:54:14.207025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:06.115 [2024-04-18 09:54:14.207054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:51848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:06.115 [2024-04-18 
09:54:14.207075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:06.115 [2024-04-18 09:54:14.207101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:51856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:06.115 [2024-04-18 09:54:14.207122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:06.115 [2024-04-18 09:54:14.207163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:51864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:06.115 [2024-04-18 09:54:14.207185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:06.115 [2024-04-18 09:54:14.207210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:51872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:06.115 [2024-04-18 09:54:14.207232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:06.115 [2024-04-18 09:54:14.207260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:51880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:06.115 [2024-04-18 09:54:14.207281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:06.115 [2024-04-18 09:54:14.207306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:51888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:06.115 [2024-04-18 09:54:14.207328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:06.115 [2024-04-18 09:54:14.207353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:51896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:06.115 [2024-04-18 09:54:14.207373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:06.115 [2024-04-18 09:54:14.207399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:51904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:06.115 [2024-04-18 09:54:14.207419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:06.115 [2024-04-18 09:54:14.207446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:51912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:06.115 [2024-04-18 09:54:14.207467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:06.116 [2024-04-18 09:54:14.207492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:51920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:06.116 [2024-04-18 09:54:14.207513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:06.116 [2024-04-18 09:54:14.207538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:51928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:06.116 [2024-04-18 09:54:14.207559] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:06.116 [2024-04-18 09:54:14.207584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:51936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:06.116 [2024-04-18 09:54:14.207614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:06.116 [2024-04-18 09:54:14.207640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:51944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:06.116 [2024-04-18 09:54:14.207661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:06.116 [2024-04-18 09:54:14.207686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:51952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:06.116 [2024-04-18 09:54:14.207707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:06.116 [2024-04-18 09:54:14.207731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:51960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:06.116 [2024-04-18 09:54:14.207752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:06.116 [2024-04-18 09:54:14.207777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:51520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:06.116 [2024-04-18 09:54:14.207798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:06.116 [2024-04-18 09:54:14.207826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:51528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:06.116 [2024-04-18 09:54:14.207847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:06.116 [2024-04-18 09:54:14.207872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:51536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:06.116 [2024-04-18 09:54:14.207892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:06.116 [2024-04-18 09:54:14.207926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:51544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:06.116 [2024-04-18 09:54:14.207961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:06.116 [2024-04-18 09:54:14.207988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:51552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:06.116 [2024-04-18 09:54:14.208008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:06.116 [2024-04-18 09:54:14.208034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:51560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:06.116 [2024-04-18 09:54:14.208054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:06.116 [2024-04-18 09:54:14.208078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:51568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:06.116 [2024-04-18 09:54:14.208098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:06.116 [2024-04-18 09:54:14.208138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:51576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:06.116 [2024-04-18 09:54:14.208162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:06.116 [2024-04-18 09:54:14.208189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:51584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:06.116 [2024-04-18 09:54:14.208209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:06.116 [2024-04-18 09:54:14.208249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:51592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:06.116 [2024-04-18 09:54:14.208270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:06.116 [2024-04-18 09:54:14.208306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:51600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:06.116 [2024-04-18 09:54:14.208327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:06.116 [2024-04-18 09:54:14.208351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:51608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:06.116 [2024-04-18 09:54:14.208372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:06.116 [2024-04-18 09:54:14.208395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:51616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:06.116 [2024-04-18 09:54:14.208416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:06.116 [2024-04-18 09:54:14.208440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:51624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:06.116 [2024-04-18 09:54:14.208460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:06.116 [2024-04-18 09:54:14.208484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:51632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:06.116 [2024-04-18 09:54:14.208505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:06.116 [2024-04-18 09:54:14.208528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:51640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:06.116 [2024-04-18 09:54:14.208549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:06.116 [2024-04-18 09:54:14.208574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:51968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:06.116 [2024-04-18 09:54:14.208594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:06.116 [2024-04-18 09:54:14.208620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:51976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:06.116 [2024-04-18 09:54:14.208640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:06.116 [2024-04-18 09:54:14.208666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:51984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:06.116 [2024-04-18 09:54:14.208686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:06.116 [2024-04-18 09:54:14.208716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:51992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:06.116 [2024-04-18 09:54:14.208736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:06.116 [2024-04-18 09:54:14.208761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:52000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:06.116 [2024-04-18 09:54:14.208782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:06.116 [2024-04-18 09:54:14.208809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:52008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:06.116 [2024-04-18 09:54:14.208836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:06.116 [2024-04-18 09:54:14.208861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:52016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:06.116 [2024-04-18 09:54:14.208881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:06.116 [2024-04-18 09:54:14.208905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:52024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:06.116 [2024-04-18 09:54:14.208925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:06.116 [2024-04-18 09:54:14.208950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:51648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:06.116 [2024-04-18 09:54:14.208970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:06.116 [2024-04-18 09:54:14.208997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:51656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:06.116 [2024-04-18 09:54:14.209017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:06.116 
[2024-04-18 09:54:14.209042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:51664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:06.116 [2024-04-18 09:54:14.209062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:06.116 [2024-04-18 09:54:14.209086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:51672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:06.116 [2024-04-18 09:54:14.209106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:06.116 [2024-04-18 09:54:14.209142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:51680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:06.116 [2024-04-18 09:54:14.209164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:06.116 [2024-04-18 09:54:14.209188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:51688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:06.116 [2024-04-18 09:54:14.209208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:06.116 [2024-04-18 09:54:14.209232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:51696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:06.116 [2024-04-18 09:54:14.209252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:06.116 [2024-04-18 09:54:14.209275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:51704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:06.117 [2024-04-18 09:54:14.209295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:06.117 [2024-04-18 09:54:14.209319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:52032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:06.117 [2024-04-18 09:54:14.209338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:06.117 [2024-04-18 09:54:14.209367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:52040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:06.117 [2024-04-18 09:54:14.209387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:06.117 [2024-04-18 09:54:14.209419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:52048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:06.117 [2024-04-18 09:54:14.209439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:06.117 [2024-04-18 09:54:14.209469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:52056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:06.117 [2024-04-18 09:54:14.209489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:06.117 [2024-04-18 09:54:14.209513] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:52064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:06.117 [2024-04-18 09:54:14.209533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:06.117 [2024-04-18 09:54:14.209557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:52072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:06.117 [2024-04-18 09:54:14.209577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:06.117 [2024-04-18 09:54:14.209600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:52080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:06.117 [2024-04-18 09:54:14.209620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:06.117 [2024-04-18 09:54:14.209643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:52088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:06.117 [2024-04-18 09:54:14.209663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:06.117 [2024-04-18 09:54:14.209688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:52096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:06.117 [2024-04-18 09:54:14.209707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:06.117 [2024-04-18 09:54:14.209734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:52104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:06.117 [2024-04-18 09:54:14.209753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:06.117 [2024-04-18 09:54:14.209777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:52112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:06.117 [2024-04-18 09:54:14.209797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:06.117 [2024-04-18 09:54:14.209821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:52120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:06.117 [2024-04-18 09:54:14.209840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:06.117 [2024-04-18 09:54:14.209864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:52128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:06.117 [2024-04-18 09:54:14.209884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:06.117 [2024-04-18 09:54:14.209908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:52136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:06.117 [2024-04-18 09:54:14.209927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:06.117 [2024-04-18 09:54:14.209953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:96 nsid:1 lba:52144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:06.117 [2024-04-18 09:54:14.209979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:06.117 [2024-04-18 09:54:14.210005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:52152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:06.117 [2024-04-18 09:54:14.210025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:06.117 [2024-04-18 09:54:14.210049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:52160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:06.117 [2024-04-18 09:54:14.210068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:06.117 [2024-04-18 09:54:14.210094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:52168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:06.117 [2024-04-18 09:54:14.210114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:06.117 [2024-04-18 09:54:14.210151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:52176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:06.117 [2024-04-18 09:54:14.210172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:06.117 [2024-04-18 09:54:14.210200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:52184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:06.117 [2024-04-18 09:54:14.210220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:06.117 [2024-04-18 09:54:14.210244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:52192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:06.117 [2024-04-18 09:54:14.210264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:06.117 [2024-04-18 09:54:14.210288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:52200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:06.117 [2024-04-18 09:54:14.210307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:06.117 [2024-04-18 09:54:14.210331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:52208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:06.117 [2024-04-18 09:54:14.210350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:06.117 [2024-04-18 09:54:14.210375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:52216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:06.117 [2024-04-18 09:54:14.210394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:06.117 [2024-04-18 09:54:14.210420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:51712 len:8 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:26:06.117 [2024-04-18 09:54:14.210440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:06.117 [2024-04-18 09:54:14.210467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:51720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:06.117 [2024-04-18 09:54:14.210487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:06.117 [2024-04-18 09:54:14.210533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:51728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:06.117 [2024-04-18 09:54:14.210554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:06.117 [2024-04-18 09:54:14.210578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:51736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:06.117 [2024-04-18 09:54:14.210604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:06.117 [2024-04-18 09:54:14.210630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:51744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:06.117 [2024-04-18 09:54:14.210649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:06.117 [2024-04-18 09:54:14.210673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:51752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:06.117 [2024-04-18 09:54:14.210692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:06.117 [2024-04-18 09:54:14.210716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:51760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:06.117 [2024-04-18 09:54:14.210736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:06.117 [2024-04-18 09:54:14.210767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:51768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:06.117 [2024-04-18 09:54:14.210786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:06.117 [2024-04-18 09:54:14.210810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:52224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:06.117 [2024-04-18 09:54:14.210829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:06.117 [2024-04-18 09:54:14.210857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:52232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:06.117 [2024-04-18 09:54:14.210876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:06.117 [2024-04-18 09:54:14.210900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:52240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:06.117 [2024-04-18 
09:54:14.210920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:06.117 [2024-04-18 09:54:14.210949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:52248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:06.117 [2024-04-18 09:54:14.210980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:06.117 [2024-04-18 09:54:14.211006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:52256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:06.117 [2024-04-18 09:54:14.211026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:06.117 [2024-04-18 09:54:14.211051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:52264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:06.117 [2024-04-18 09:54:14.211070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:06.117 [2024-04-18 09:54:14.211097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:52272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:06.118 [2024-04-18 09:54:14.211117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:06.118 [2024-04-18 09:54:14.211156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:52280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:06.118 [2024-04-18 09:54:14.211177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:06.118 [2024-04-18 09:54:14.211210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:52288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:06.118 [2024-04-18 09:54:14.211230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:06.118 [2024-04-18 09:54:14.211258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:52296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:06.118 [2024-04-18 09:54:14.211278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:06.118 [2024-04-18 09:54:14.211302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:52304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:06.118 [2024-04-18 09:54:14.211321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:06.118 [2024-04-18 09:54:14.211346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:52312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:06.118 [2024-04-18 09:54:14.211366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:06.118 [2024-04-18 09:54:14.211390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:52320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:06.118 [2024-04-18 09:54:14.211409] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:06.118 [2024-04-18 09:54:14.211433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:52328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:06.118 [2024-04-18 09:54:14.211454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:06.118 [2024-04-18 09:54:14.211478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:52336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:06.118 [2024-04-18 09:54:14.211498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:06.118 [2024-04-18 09:54:14.211522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:52344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:06.118 [2024-04-18 09:54:14.211542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:06.118 [2024-04-18 09:54:14.211566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:51776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:06.118 [2024-04-18 09:54:14.211585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:06.118 [2024-04-18 09:54:14.211613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:51784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:06.118 [2024-04-18 09:54:14.211633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:06.118 [2024-04-18 09:54:14.211654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:51792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:06.118 [2024-04-18 09:54:14.211673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:06.118 [2024-04-18 09:54:14.211698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:51800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:06.118 [2024-04-18 09:54:14.211718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:06.118 [2024-04-18 09:54:14.211738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:51808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:06.118 [2024-04-18 09:54:14.211765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:06.118 [2024-04-18 09:54:14.211788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:51816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:06.118 [2024-04-18 09:54:14.211806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:06.118 [2024-04-18 09:54:14.211833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:51824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:06.118 [2024-04-18 09:54:14.211852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:06.118 [2024-04-18 09:54:14.211872] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x614000007440 is same with the state(5) to be set 00:26:06.118 [2024-04-18 09:54:14.211899] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:06.118 [2024-04-18 09:54:14.211915] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:06.118 [2024-04-18 09:54:14.211931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:51832 len:8 PRP1 0x0 PRP2 0x0 00:26:06.118 [2024-04-18 09:54:14.211951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:06.118 [2024-04-18 09:54:14.212254] bdev_nvme.c:1600:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x614000007440 was disconnected and freed. reset controller. 00:26:06.118 [2024-04-18 09:54:14.212294] bdev_nvme.c:1856:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4420 to 10.0.0.2:4421 00:26:06.118 [2024-04-18 09:54:14.212380] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:26:06.118 [2024-04-18 09:54:14.212408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:06.118 [2024-04-18 09:54:14.212430] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:26:06.118 [2024-04-18 09:54:14.212448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:06.118 [2024-04-18 09:54:14.212467] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:26:06.118 [2024-04-18 09:54:14.212484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:06.118 [2024-04-18 09:54:14.212503] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:26:06.118 [2024-04-18 09:54:14.212520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:06.118 [2024-04-18 09:54:14.212538] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:06.118 [2024-04-18 09:54:14.212628] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x614000004c40 (9): Bad file descriptor 00:26:06.118 [2024-04-18 09:54:14.216757] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:06.118 [2024-04-18 09:54:14.248115] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:26:06.118 [2024-04-18 09:54:17.892800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:108272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:06.118 [2024-04-18 09:54:17.892866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:06.118 [2024-04-18 09:54:17.892909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:108280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:06.118 [2024-04-18 09:54:17.892961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:06.118 [2024-04-18 09:54:17.892999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:108288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:06.118 [2024-04-18 09:54:17.893019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:06.118 [2024-04-18 09:54:17.893041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:108296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:06.118 [2024-04-18 09:54:17.893059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:06.118 [2024-04-18 09:54:17.893080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:108304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:06.118 [2024-04-18 09:54:17.893098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:06.118 [2024-04-18 09:54:17.893120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:108312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:06.118 [2024-04-18 09:54:17.893165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:06.118 [2024-04-18 09:54:17.893188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:108320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:06.118 [2024-04-18 09:54:17.893207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:06.118 [2024-04-18 09:54:17.893228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:108328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:06.118 [2024-04-18 09:54:17.893247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:06.118 [2024-04-18 09:54:17.893273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:108976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:06.118 [2024-04-18 09:54:17.893293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:06.118 [2024-04-18 09:54:17.893315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:108984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:06.118 [2024-04-18 09:54:17.893334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:06.118 [2024-04-18 
09:54:17.893356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:108992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:06.118 [2024-04-18 09:54:17.893386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:06.118 [2024-04-18 09:54:17.893407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:109000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:06.118 [2024-04-18 09:54:17.893426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:06.118 [2024-04-18 09:54:17.893447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:109008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:06.118 [2024-04-18 09:54:17.893465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:06.119 [2024-04-18 09:54:17.893487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:109016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:06.119 [2024-04-18 09:54:17.893505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:06.119 [2024-04-18 09:54:17.893535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:109024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:06.119 [2024-04-18 09:54:17.893566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:06.119 [2024-04-18 09:54:17.893587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:109032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:06.119 [2024-04-18 09:54:17.893606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:06.119 [2024-04-18 09:54:17.893628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:108336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:06.119 [2024-04-18 09:54:17.893649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:06.119 [2024-04-18 09:54:17.893671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:108344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:06.119 [2024-04-18 09:54:17.893691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:06.119 [2024-04-18 09:54:17.893712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:108352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:06.119 [2024-04-18 09:54:17.893731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:06.119 [2024-04-18 09:54:17.893752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:108360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:06.119 [2024-04-18 09:54:17.893771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:06.119 [2024-04-18 09:54:17.893792] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:108368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:06.119 [2024-04-18 09:54:17.893811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:06.119 [2024-04-18 09:54:17.893832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:108376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:06.119 [2024-04-18 09:54:17.893856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:06.119 [2024-04-18 09:54:17.893878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:108384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:06.119 [2024-04-18 09:54:17.893897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:06.119 [2024-04-18 09:54:17.893918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:108392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:06.119 [2024-04-18 09:54:17.893937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:06.119 [2024-04-18 09:54:17.893958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:108400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:06.119 [2024-04-18 09:54:17.893977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:06.119 [2024-04-18 09:54:17.893998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:108408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:06.119 [2024-04-18 09:54:17.894017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:06.119 [2024-04-18 09:54:17.894038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:108416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:06.119 [2024-04-18 09:54:17.894064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:06.119 [2024-04-18 09:54:17.894086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:108424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:06.119 [2024-04-18 09:54:17.894105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:06.119 [2024-04-18 09:54:17.894137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:108432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:06.119 [2024-04-18 09:54:17.894159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:06.119 [2024-04-18 09:54:17.894182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:108440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:06.119 [2024-04-18 09:54:17.894200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:06.119 [2024-04-18 09:54:17.894221] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:94 nsid:1 lba:108448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:06.119 [2024-04-18 09:54:17.894240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:06.119 [2024-04-18 09:54:17.894261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:108456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:06.119 [2024-04-18 09:54:17.894280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:06.119 [2024-04-18 09:54:17.894322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:108464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:06.119 [2024-04-18 09:54:17.894342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:06.119 [2024-04-18 09:54:17.894364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:108472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:06.119 [2024-04-18 09:54:17.894383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:06.119 [2024-04-18 09:54:17.894405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:108480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:06.119 [2024-04-18 09:54:17.894423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:06.119 [2024-04-18 09:54:17.894454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:108488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:06.119 [2024-04-18 09:54:17.894472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:06.119 [2024-04-18 09:54:17.894494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:108496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:06.119 [2024-04-18 09:54:17.894544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:06.119 [2024-04-18 09:54:17.894566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:108504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:06.119 [2024-04-18 09:54:17.894585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:06.119 [2024-04-18 09:54:17.894606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:108512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:06.119 [2024-04-18 09:54:17.894625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:06.119 [2024-04-18 09:54:17.894647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:108520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:06.119 [2024-04-18 09:54:17.894674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:06.119 [2024-04-18 09:54:17.894697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 
nsid:1 lba:109040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:06.119 [2024-04-18 09:54:17.894716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:06.119 [2024-04-18 09:54:17.894737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:109048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:06.119 [2024-04-18 09:54:17.894756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:06.119 [2024-04-18 09:54:17.894777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:109056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:06.119 [2024-04-18 09:54:17.894796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:06.119 [2024-04-18 09:54:17.894817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:109064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:06.119 [2024-04-18 09:54:17.894836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:06.119 [2024-04-18 09:54:17.894857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:109072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:06.120 [2024-04-18 09:54:17.894876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:06.120 [2024-04-18 09:54:17.894897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:109080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:06.120 [2024-04-18 09:54:17.894915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:06.120 [2024-04-18 09:54:17.894937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:109088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:06.120 [2024-04-18 09:54:17.894955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:06.120 [2024-04-18 09:54:17.894986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:109096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:06.120 [2024-04-18 09:54:17.895004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:06.120 [2024-04-18 09:54:17.895034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:108528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:06.120 [2024-04-18 09:54:17.895052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:06.120 [2024-04-18 09:54:17.895073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:108536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:06.120 [2024-04-18 09:54:17.895092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:06.120 [2024-04-18 09:54:17.895113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:108544 len:8 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:26:06.120 [2024-04-18 09:54:17.895144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:06.120 [2024-04-18 09:54:17.895168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:108552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:06.120 [2024-04-18 09:54:17.895186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:06.120 [2024-04-18 09:54:17.895215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:108560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:06.120 [2024-04-18 09:54:17.895235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:06.120 [2024-04-18 09:54:17.895256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:108568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:06.120 [2024-04-18 09:54:17.895274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:06.120 [2024-04-18 09:54:17.895295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:108576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:06.120 [2024-04-18 09:54:17.895313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:06.120 [2024-04-18 09:54:17.895334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:108584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:06.120 [2024-04-18 09:54:17.895353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:06.120 [2024-04-18 09:54:17.895385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:108592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:06.120 [2024-04-18 09:54:17.895403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:06.120 [2024-04-18 09:54:17.895424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:108600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:06.120 [2024-04-18 09:54:17.895443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:06.120 [2024-04-18 09:54:17.895463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:108608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:06.120 [2024-04-18 09:54:17.895481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:06.120 [2024-04-18 09:54:17.895513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:108616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:06.120 [2024-04-18 09:54:17.895538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:06.120 [2024-04-18 09:54:17.895562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:108624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:26:06.120 [2024-04-18 09:54:17.895580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:06.120 [2024-04-18 09:54:17.895601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:108632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:06.120 [2024-04-18 09:54:17.895625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:06.120 [2024-04-18 09:54:17.895648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:108640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:06.120 [2024-04-18 09:54:17.895667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:06.120 [2024-04-18 09:54:17.895687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:108648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:06.120 [2024-04-18 09:54:17.895706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:06.120 [2024-04-18 09:54:17.895727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:109104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:06.120 [2024-04-18 09:54:17.895752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:06.120 [2024-04-18 09:54:17.895775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:109112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:06.120 [2024-04-18 09:54:17.895794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:06.120 [2024-04-18 09:54:17.895815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:109120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:06.120 [2024-04-18 09:54:17.895833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:06.120 [2024-04-18 09:54:17.895854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:109128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:06.120 [2024-04-18 09:54:17.895873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:06.120 [2024-04-18 09:54:17.895894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:109136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:06.120 [2024-04-18 09:54:17.895912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:06.120 [2024-04-18 09:54:17.895933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:109144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:06.120 [2024-04-18 09:54:17.895952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:06.120 [2024-04-18 09:54:17.895973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:109152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:06.120 [2024-04-18 
09:54:17.896000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:06.120 [2024-04-18 09:54:17.896022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:109160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:06.120 [2024-04-18 09:54:17.896040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:06.120 [2024-04-18 09:54:17.896062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:108656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:06.120 [2024-04-18 09:54:17.896080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:06.120 [2024-04-18 09:54:17.896114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:108664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:06.120 [2024-04-18 09:54:17.896146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:06.120 [2024-04-18 09:54:17.896169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:108672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:06.120 [2024-04-18 09:54:17.896187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:06.120 [2024-04-18 09:54:17.896210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:108680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:06.120 [2024-04-18 09:54:17.896229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:06.120 [2024-04-18 09:54:17.896250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:108688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:06.120 [2024-04-18 09:54:17.896268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:06.120 [2024-04-18 09:54:17.896297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:108696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:06.120 [2024-04-18 09:54:17.896316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:06.120 [2024-04-18 09:54:17.896337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:108704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:06.120 [2024-04-18 09:54:17.896356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:06.120 [2024-04-18 09:54:17.896387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:108712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:06.120 [2024-04-18 09:54:17.896406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:06.120 [2024-04-18 09:54:17.896427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:108720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:06.120 [2024-04-18 09:54:17.896445] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:06.120 [2024-04-18 09:54:17.896466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:108728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:06.120 [2024-04-18 09:54:17.896484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:06.120 [2024-04-18 09:54:17.896505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:108736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:06.120 [2024-04-18 09:54:17.896523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:06.120 [2024-04-18 09:54:17.896544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:108744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:06.120 [2024-04-18 09:54:17.896562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:06.120 [2024-04-18 09:54:17.896583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:108752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:06.121 [2024-04-18 09:54:17.896601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:06.121 [2024-04-18 09:54:17.896623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:108760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:06.121 [2024-04-18 09:54:17.896645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:06.121 [2024-04-18 09:54:17.896667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:108768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:06.121 [2024-04-18 09:54:17.896685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:06.121 [2024-04-18 09:54:17.896706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:108776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:06.121 [2024-04-18 09:54:17.896724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:06.121 [2024-04-18 09:54:17.896745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:109168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:06.121 [2024-04-18 09:54:17.896763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:06.121 [2024-04-18 09:54:17.896784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:109176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:06.121 [2024-04-18 09:54:17.896808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:06.121 [2024-04-18 09:54:17.896830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:109184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:06.121 [2024-04-18 09:54:17.896848] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:06.121 [2024-04-18 09:54:17.896870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:109192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:06.121 [2024-04-18 09:54:17.896889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:06.121 [2024-04-18 09:54:17.896909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:109200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:06.121 [2024-04-18 09:54:17.896927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:06.121 [2024-04-18 09:54:17.896948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:109208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:06.121 [2024-04-18 09:54:17.896967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:06.121 [2024-04-18 09:54:17.896997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:109216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:06.121 [2024-04-18 09:54:17.897015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:06.121 [2024-04-18 09:54:17.897035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:109224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:06.121 [2024-04-18 09:54:17.897054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:06.121 [2024-04-18 09:54:17.897094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:108784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:06.121 [2024-04-18 09:54:17.897113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:06.121 [2024-04-18 09:54:17.897152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:108792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:06.121 [2024-04-18 09:54:17.897173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:06.121 [2024-04-18 09:54:17.897194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:108800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:06.121 [2024-04-18 09:54:17.897213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:06.121 [2024-04-18 09:54:17.897233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:108808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:06.121 [2024-04-18 09:54:17.897252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:06.121 [2024-04-18 09:54:17.897273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:108816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:06.121 [2024-04-18 09:54:17.897291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION 
(00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:06.121 [2024-04-18 09:54:17.897312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:108824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:06.121 [2024-04-18 09:54:17.897330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:06.121 [2024-04-18 09:54:17.897371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:108832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:06.121 [2024-04-18 09:54:17.897391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:06.121 [2024-04-18 09:54:17.897412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:108840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:06.121 [2024-04-18 09:54:17.897441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:06.121 [2024-04-18 09:54:17.897463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:109232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:06.121 [2024-04-18 09:54:17.897481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:06.121 [2024-04-18 09:54:17.897502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:109240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:06.121 [2024-04-18 09:54:17.897520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:06.121 [2024-04-18 09:54:17.897542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:109248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:06.121 [2024-04-18 09:54:17.897561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:06.121 [2024-04-18 09:54:17.897583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:109256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:06.121 [2024-04-18 09:54:17.897602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:06.121 [2024-04-18 09:54:17.897623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:109264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:06.121 [2024-04-18 09:54:17.897642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:06.121 [2024-04-18 09:54:17.897662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:109272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:06.121 [2024-04-18 09:54:17.897681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:06.121 [2024-04-18 09:54:17.897702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:109280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:06.121 [2024-04-18 09:54:17.897721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 
m:0 dnr:0 00:26:06.121 [2024-04-18 09:54:17.897742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:109288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:06.121 [2024-04-18 09:54:17.897761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:06.121 [2024-04-18 09:54:17.897782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:108848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:06.121 [2024-04-18 09:54:17.897800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:06.121 [2024-04-18 09:54:17.897833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:108856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:06.121 [2024-04-18 09:54:17.897852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:06.121 [2024-04-18 09:54:17.897872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:108864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:06.121 [2024-04-18 09:54:17.897891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:06.121 [2024-04-18 09:54:17.897918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:108872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:06.121 [2024-04-18 09:54:17.897937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:06.121 [2024-04-18 09:54:17.897959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:108880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:06.121 [2024-04-18 09:54:17.897977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:06.121 [2024-04-18 09:54:17.897998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:108888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:06.121 [2024-04-18 09:54:17.898016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:06.121 [2024-04-18 09:54:17.898039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:108896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:06.121 [2024-04-18 09:54:17.898057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:06.121 [2024-04-18 09:54:17.898078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:108904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:06.121 [2024-04-18 09:54:17.898096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:06.121 [2024-04-18 09:54:17.898117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:108912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:06.121 [2024-04-18 09:54:17.898150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:06.121 [2024-04-18 
09:54:17.898172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:108920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:06.121 [2024-04-18 09:54:17.898191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:06.121 [2024-04-18 09:54:17.898213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:108928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:06.121 [2024-04-18 09:54:17.898231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:06.121 [2024-04-18 09:54:17.898252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:108936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:06.121 [2024-04-18 09:54:17.898271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:06.121 [2024-04-18 09:54:17.898292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:108944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:06.122 [2024-04-18 09:54:17.898310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:06.122 [2024-04-18 09:54:17.898332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:108952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:06.122 [2024-04-18 09:54:17.898350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:06.122 [2024-04-18 09:54:17.898371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:108960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:06.122 [2024-04-18 09:54:17.898390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:06.122 [2024-04-18 09:54:17.898409] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x614000008240 is same with the state(5) to be set 00:26:06.122 [2024-04-18 09:54:17.898456] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:06.122 [2024-04-18 09:54:17.898473] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:06.122 [2024-04-18 09:54:17.898490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:108968 len:8 PRP1 0x0 PRP2 0x0 00:26:06.122 [2024-04-18 09:54:17.898521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:06.122 [2024-04-18 09:54:17.898811] bdev_nvme.c:1600:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x614000008240 was disconnected and freed. reset controller. 
00:26:06.122 [2024-04-18 09:54:17.898838] bdev_nvme.c:1856:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4421 to 10.0.0.2:4422 00:26:06.122 [2024-04-18 09:54:17.898921] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:26:06.122 [2024-04-18 09:54:17.898958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:06.122 [2024-04-18 09:54:17.898986] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:26:06.122 [2024-04-18 09:54:17.899011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:06.122 [2024-04-18 09:54:17.899031] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:26:06.122 [2024-04-18 09:54:17.899049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:06.122 [2024-04-18 09:54:17.899068] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:26:06.122 [2024-04-18 09:54:17.899085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:06.122 [2024-04-18 09:54:17.899103] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:06.122 [2024-04-18 09:54:17.899182] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x614000004c40 (9): Bad file descriptor 00:26:06.122 [2024-04-18 09:54:17.903456] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:06.122 [2024-04-18 09:54:17.940272] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:26:06.122 [2024-04-18 09:54:22.436212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:55360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:06.122 [2024-04-18 09:54:22.436295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:06.122 [2024-04-18 09:54:22.436358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:55368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:06.122 [2024-04-18 09:54:22.436399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:06.122 [2024-04-18 09:54:22.436437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:55376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:06.122 [2024-04-18 09:54:22.436472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:06.122 [2024-04-18 09:54:22.436512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:55384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:06.122 [2024-04-18 09:54:22.436547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:06.122 [2024-04-18 09:54:22.436584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:55392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:06.122 [2024-04-18 09:54:22.436658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:06.122 [2024-04-18 09:54:22.436699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:55400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:06.122 [2024-04-18 09:54:22.436733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:06.122 [2024-04-18 09:54:22.436771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:55408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:06.122 [2024-04-18 09:54:22.436807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:06.122 [2024-04-18 09:54:22.436839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:55416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:06.122 [2024-04-18 09:54:22.436874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:06.122 [2024-04-18 09:54:22.436914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:55424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:06.122 [2024-04-18 09:54:22.436948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:06.122 [2024-04-18 09:54:22.436984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:55432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:06.122 [2024-04-18 09:54:22.437016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:06.122 [2024-04-18 09:54:22.437052] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:55440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:06.122 [2024-04-18 09:54:22.437085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:06.122 [2024-04-18 09:54:22.437140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:55448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:06.122 [2024-04-18 09:54:22.437178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:06.122 [2024-04-18 09:54:22.437217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:55456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:06.122 [2024-04-18 09:54:22.437253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:06.122 [2024-04-18 09:54:22.437290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:55464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:06.122 [2024-04-18 09:54:22.437319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:06.122 [2024-04-18 09:54:22.437354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:55472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:06.122 [2024-04-18 09:54:22.437390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:06.122 [2024-04-18 09:54:22.437429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:55480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:06.122 [2024-04-18 09:54:22.437459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:06.122 [2024-04-18 09:54:22.437499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:55488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:06.122 [2024-04-18 09:54:22.437533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:06.122 [2024-04-18 09:54:22.437585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:55496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:06.122 [2024-04-18 09:54:22.437623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:06.122 [2024-04-18 09:54:22.437664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:55504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:06.122 [2024-04-18 09:54:22.437699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:06.122 [2024-04-18 09:54:22.437731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:55512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:06.122 [2024-04-18 09:54:22.437767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:06.122 [2024-04-18 09:54:22.437805] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:55840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:06.122 [2024-04-18 09:54:22.437841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:06.122 [2024-04-18 09:54:22.437872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:55848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:06.122 [2024-04-18 09:54:22.437909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:06.122 [2024-04-18 09:54:22.437949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:55856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:06.122 [2024-04-18 09:54:22.437982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:06.122 [2024-04-18 09:54:22.438019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:55864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:06.122 [2024-04-18 09:54:22.438078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:06.122 [2024-04-18 09:54:22.438119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:55872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:06.122 [2024-04-18 09:54:22.438175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:06.122 [2024-04-18 09:54:22.438217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:55880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:06.122 [2024-04-18 09:54:22.438253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:06.122 [2024-04-18 09:54:22.438288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:55888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:06.122 [2024-04-18 09:54:22.438320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:06.122 [2024-04-18 09:54:22.438360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:55896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:06.122 [2024-04-18 09:54:22.438392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:06.122 [2024-04-18 09:54:22.438429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:55520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:06.122 [2024-04-18 09:54:22.438463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:06.123 [2024-04-18 09:54:22.438503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:55528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:06.123 [2024-04-18 09:54:22.438580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:06.123 [2024-04-18 09:54:22.438622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:87 nsid:1 lba:55536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:06.123 [2024-04-18 09:54:22.438657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:06.123 [2024-04-18 09:54:22.438699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:55544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:06.123 [2024-04-18 09:54:22.438736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:06.123 [2024-04-18 09:54:22.438768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:55552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:06.123 [2024-04-18 09:54:22.438802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:06.123 [2024-04-18 09:54:22.438851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:55560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:06.123 [2024-04-18 09:54:22.438882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:06.123 [2024-04-18 09:54:22.438916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:55568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:06.123 [2024-04-18 09:54:22.438949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:06.123 [2024-04-18 09:54:22.438988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:55576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:06.123 [2024-04-18 09:54:22.439021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:06.123 [2024-04-18 09:54:22.439058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:55904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:06.123 [2024-04-18 09:54:22.439093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:06.123 [2024-04-18 09:54:22.439149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:55912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:06.123 [2024-04-18 09:54:22.439188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:06.123 [2024-04-18 09:54:22.439227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:55920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:06.123 [2024-04-18 09:54:22.439259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:06.123 [2024-04-18 09:54:22.439296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:55928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:06.123 [2024-04-18 09:54:22.439333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:06.123 [2024-04-18 09:54:22.439371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:55936 len:8 SGL DATA 
BLOCK OFFSET 0x0 len:0x1000 00:26:06.123 [2024-04-18 09:54:22.439410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:06.123 [2024-04-18 09:54:22.439446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:55944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:06.123 [2024-04-18 09:54:22.439483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:06.123 [2024-04-18 09:54:22.439537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:55952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:06.123 [2024-04-18 09:54:22.439570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:06.123 [2024-04-18 09:54:22.439609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:55960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:06.123 [2024-04-18 09:54:22.439643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:06.123 [2024-04-18 09:54:22.439676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:55968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:06.123 [2024-04-18 09:54:22.439709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:06.123 [2024-04-18 09:54:22.439749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:55976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:06.123 [2024-04-18 09:54:22.439783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:06.123 [2024-04-18 09:54:22.439820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:55984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:06.123 [2024-04-18 09:54:22.439854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:06.123 [2024-04-18 09:54:22.439889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:55992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:06.123 [2024-04-18 09:54:22.439922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:06.123 [2024-04-18 09:54:22.439959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:56000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:06.123 [2024-04-18 09:54:22.439994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:06.123 [2024-04-18 09:54:22.440027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:56008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:06.123 [2024-04-18 09:54:22.440060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:06.123 [2024-04-18 09:54:22.440098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:56016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:06.123 
[2024-04-18 09:54:22.440152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:06.123 [2024-04-18 09:54:22.440193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:56024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:06.123 [2024-04-18 09:54:22.440229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:06.123 [2024-04-18 09:54:22.440268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:56032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:06.123 [2024-04-18 09:54:22.440301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:06.123 [2024-04-18 09:54:22.440335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:56040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:06.123 [2024-04-18 09:54:22.440370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:06.123 [2024-04-18 09:54:22.440408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:56048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:06.123 [2024-04-18 09:54:22.440441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:06.123 [2024-04-18 09:54:22.440493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:56056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:06.123 [2024-04-18 09:54:22.440529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:06.123 [2024-04-18 09:54:22.440566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:56064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:06.123 [2024-04-18 09:54:22.440597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:06.123 [2024-04-18 09:54:22.440634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:56072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:06.123 [2024-04-18 09:54:22.440669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:06.123 [2024-04-18 09:54:22.440706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:56080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:06.123 [2024-04-18 09:54:22.440737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:06.123 [2024-04-18 09:54:22.440772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:56088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:06.123 [2024-04-18 09:54:22.440808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:06.123 [2024-04-18 09:54:22.440845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:55584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:06.123 [2024-04-18 09:54:22.440876] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:06.123 [2024-04-18 09:54:22.440914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:55592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:06.123 [2024-04-18 09:54:22.440951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:06.123 [2024-04-18 09:54:22.440989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:55600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:06.123 [2024-04-18 09:54:22.441019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:06.123 [2024-04-18 09:54:22.441056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:55608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:06.123 [2024-04-18 09:54:22.441090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:06.123 [2024-04-18 09:54:22.441145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:55616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:06.123 [2024-04-18 09:54:22.441185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:06.123 [2024-04-18 09:54:22.441223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:55624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:06.123 [2024-04-18 09:54:22.441260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:06.123 [2024-04-18 09:54:22.441294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:55632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:06.123 [2024-04-18 09:54:22.441330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:06.123 [2024-04-18 09:54:22.441366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:55640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:06.123 [2024-04-18 09:54:22.441413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:06.123 [2024-04-18 09:54:22.441457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:56096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:06.123 [2024-04-18 09:54:22.441492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:06.123 [2024-04-18 09:54:22.441528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:56104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:06.124 [2024-04-18 09:54:22.441560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:06.124 [2024-04-18 09:54:22.441599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:56112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:06.124 [2024-04-18 09:54:22.441631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:06.124 [2024-04-18 09:54:22.441669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:56120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:06.124 [2024-04-18 09:54:22.441704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:06.124 [2024-04-18 09:54:22.441740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:56128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:06.124 [2024-04-18 09:54:22.441772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:06.124 [2024-04-18 09:54:22.441810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:56136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:06.124 [2024-04-18 09:54:22.441846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:06.124 [2024-04-18 09:54:22.441881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:56144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:06.124 [2024-04-18 09:54:22.441909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:06.124 [2024-04-18 09:54:22.441946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:56152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:06.124 [2024-04-18 09:54:22.441982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:06.124 [2024-04-18 09:54:22.442018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:56160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:06.124 [2024-04-18 09:54:22.442047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:06.124 [2024-04-18 09:54:22.442086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:56168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:06.124 [2024-04-18 09:54:22.442158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:06.124 [2024-04-18 09:54:22.442202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:56176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:06.124 [2024-04-18 09:54:22.442242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:06.124 [2024-04-18 09:54:22.442278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:56184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:06.124 [2024-04-18 09:54:22.442309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:06.124 [2024-04-18 09:54:22.442363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:56192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:06.124 [2024-04-18 09:54:22.442399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:26:06.124 [2024-04-18 09:54:22.442437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:56200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:06.124 [2024-04-18 09:54:22.442469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:06.124 [2024-04-18 09:54:22.442507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:56208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:06.124 [2024-04-18 09:54:22.442557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:06.124 [2024-04-18 09:54:22.442594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:56216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:06.124 [2024-04-18 09:54:22.442629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:06.124 [2024-04-18 09:54:22.442668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:55648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:06.124 [2024-04-18 09:54:22.442704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:06.124 [2024-04-18 09:54:22.442741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:55656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:06.124 [2024-04-18 09:54:22.442774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:06.124 [2024-04-18 09:54:22.442808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:55664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:06.124 [2024-04-18 09:54:22.442843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:06.124 [2024-04-18 09:54:22.442882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:55672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:06.124 [2024-04-18 09:54:22.442935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:06.124 [2024-04-18 09:54:22.442976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:55680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:06.124 [2024-04-18 09:54:22.443013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:06.124 [2024-04-18 09:54:22.443050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:55688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:06.124 [2024-04-18 09:54:22.443079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:06.124 [2024-04-18 09:54:22.443116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:55696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:06.124 [2024-04-18 09:54:22.443176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:06.124 
[2024-04-18 09:54:22.443214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:55704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:06.124 [2024-04-18 09:54:22.443248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:06.124 [2024-04-18 09:54:22.443287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:56224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:06.124 [2024-04-18 09:54:22.443337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:06.124 [2024-04-18 09:54:22.443377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:56232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:06.124 [2024-04-18 09:54:22.443414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:06.124 [2024-04-18 09:54:22.443453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:56240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:06.124 [2024-04-18 09:54:22.443484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:06.124 [2024-04-18 09:54:22.443520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:56248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:06.124 [2024-04-18 09:54:22.443558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:06.124 [2024-04-18 09:54:22.443597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:56256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:06.124 [2024-04-18 09:54:22.443627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:06.124 [2024-04-18 09:54:22.443666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:56264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:06.124 [2024-04-18 09:54:22.443701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:06.124 [2024-04-18 09:54:22.443739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:56272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:06.124 [2024-04-18 09:54:22.443771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:06.124 [2024-04-18 09:54:22.443810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:56280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:06.124 [2024-04-18 09:54:22.443842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:06.124 [2024-04-18 09:54:22.443879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:56288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:06.124 [2024-04-18 09:54:22.443911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:06.124 [2024-04-18 09:54:22.443950] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:56296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:06.124 [2024-04-18 09:54:22.443987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:06.124 [2024-04-18 09:54:22.444022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:56304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:06.125 [2024-04-18 09:54:22.444058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:06.125 [2024-04-18 09:54:22.444097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:56312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:06.125 [2024-04-18 09:54:22.444146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:06.125 [2024-04-18 09:54:22.444189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:56320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:06.125 [2024-04-18 09:54:22.444228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:06.125 [2024-04-18 09:54:22.444264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:56328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:06.125 [2024-04-18 09:54:22.444311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:06.125 [2024-04-18 09:54:22.444349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:56336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:06.125 [2024-04-18 09:54:22.444385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:06.125 [2024-04-18 09:54:22.444421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:56344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:06.125 [2024-04-18 09:54:22.444456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:06.125 [2024-04-18 09:54:22.444495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:56352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:06.125 [2024-04-18 09:54:22.444525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:06.125 [2024-04-18 09:54:22.444564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:56360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:06.125 [2024-04-18 09:54:22.444598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:06.125 [2024-04-18 09:54:22.444635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:56368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:06.125 [2024-04-18 09:54:22.444665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:06.125 [2024-04-18 09:54:22.444705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:116 nsid:1 lba:56376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:06.125 [2024-04-18 09:54:22.444740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:06.125 [2024-04-18 09:54:22.444777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:55712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:06.125 [2024-04-18 09:54:22.444809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:06.125 [2024-04-18 09:54:22.444851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:55720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:06.125 [2024-04-18 09:54:22.444881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:06.125 [2024-04-18 09:54:22.444917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:55728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:06.125 [2024-04-18 09:54:22.444952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:06.125 [2024-04-18 09:54:22.444989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:55736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:06.125 [2024-04-18 09:54:22.445022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:06.125 [2024-04-18 09:54:22.445057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:55744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:06.125 [2024-04-18 09:54:22.445092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:06.125 [2024-04-18 09:54:22.445145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:55752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:06.125 [2024-04-18 09:54:22.445183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:06.125 [2024-04-18 09:54:22.445234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:55760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:06.125 [2024-04-18 09:54:22.445273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:06.125 [2024-04-18 09:54:22.445312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:55768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:06.125 [2024-04-18 09:54:22.445341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:06.125 [2024-04-18 09:54:22.445381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:55776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:06.125 [2024-04-18 09:54:22.445416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:06.125 [2024-04-18 09:54:22.445454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:55784 len:8 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:06.125 [2024-04-18 09:54:22.445487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:06.125 [2024-04-18 09:54:22.445524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:55792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:06.125 [2024-04-18 09:54:22.445557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:06.125 [2024-04-18 09:54:22.445594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:55800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:06.125 [2024-04-18 09:54:22.445631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:06.125 [2024-04-18 09:54:22.445670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:55808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:06.125 [2024-04-18 09:54:22.445701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:06.125 [2024-04-18 09:54:22.445744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:55816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:06.125 [2024-04-18 09:54:22.445781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:06.125 [2024-04-18 09:54:22.445817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:55824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:06.125 [2024-04-18 09:54:22.445852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:06.125 [2024-04-18 09:54:22.445979] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:06.125 [2024-04-18 09:54:22.446014] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:06.125 [2024-04-18 09:54:22.446041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:55832 len:8 PRP1 0x0 PRP2 0x0 00:26:06.125 [2024-04-18 09:54:22.446077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:06.125 [2024-04-18 09:54:22.446469] bdev_nvme.c:1600:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x614000009440 was disconnected and freed. reset controller. 
00:26:06.125 [2024-04-18 09:54:22.446514] bdev_nvme.c:1856:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4422 to 10.0.0.2:4420
00:26:06.125 [2024-04-18 09:54:22.446669] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
00:26:06.125 [2024-04-18 09:54:22.446715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:06.125 [2024-04-18 09:54:22.446770] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000
00:26:06.125 [2024-04-18 09:54:22.446803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:06.125 [2024-04-18 09:54:22.446840] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000
00:26:06.125 [2024-04-18 09:54:22.446876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:06.125 [2024-04-18 09:54:22.446908] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000
00:26:06.125 [2024-04-18 09:54:22.446938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:06.125 [2024-04-18 09:54:22.446971] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:06.125 [2024-04-18 09:54:22.447165] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x614000004c40 (9): Bad file descriptor
00:26:06.125 [2024-04-18 09:54:22.452863] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:06.125 [2024-04-18 09:54:22.503955] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful.
00:26:06.125
00:26:06.125 Latency(us)
00:26:06.125 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:26:06.125 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:26:06.125 Verification LBA range: start 0x0 length 0x4000
00:26:06.125 NVMe0n1 : 15.01 6067.64 23.70 208.17 0.00 20357.33 1087.30 24784.52
00:26:06.125 ===================================================================================================================
00:26:06.125 Total : 6067.64 23.70 208.17 0.00 20357.33 1087.30 24784.52
00:26:06.125 Received shutdown signal, test time was about 15.000000 seconds
00:26:06.125
00:26:06.125 Latency(us)
00:26:06.125 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:26:06.125 ===================================================================================================================
00:26:06.125 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:26:06.125 09:54:29 -- host/failover.sh@65 -- # grep -c 'Resetting controller successful'
00:26:06.125 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...
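The grep -c 'Resetting controller successful' step traced just above (failover.sh@65) is the pass check for the 15-second run: one successful controller reset is expected per failover (4420 -> 4421 -> 4422 -> 4420). A minimal sketch of that check follows; the trace does not show what the grep reads at this point, so the try.txt capture file that this log later cats and removes is assumed as its input, and the exit on mismatch is illustrative rather than the verbatim script.

  # hedged reconstruction of the failover.sh@65-67 steps traced above
  count=$(grep -c 'Resetting controller successful' /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt)
  (( count != 3 )) && exit 1   # three failovers, three successful resets expected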
00:26:06.125 09:54:29 -- host/failover.sh@65 -- # count=3 00:26:06.125 09:54:29 -- host/failover.sh@67 -- # (( count != 3 )) 00:26:06.125 09:54:29 -- host/failover.sh@73 -- # bdevperf_pid=76110 00:26:06.125 09:54:29 -- host/failover.sh@75 -- # waitforlisten 76110 /var/tmp/bdevperf.sock 00:26:06.125 09:54:29 -- host/failover.sh@72 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 1 -f 00:26:06.125 09:54:29 -- common/autotest_common.sh@817 -- # '[' -z 76110 ']' 00:26:06.125 09:54:29 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:26:06.125 09:54:29 -- common/autotest_common.sh@822 -- # local max_retries=100 00:26:06.126 09:54:29 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:26:06.126 09:54:29 -- common/autotest_common.sh@826 -- # xtrace_disable 00:26:06.126 09:54:29 -- common/autotest_common.sh@10 -- # set +x 00:26:07.062 09:54:30 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:26:07.062 09:54:30 -- common/autotest_common.sh@850 -- # return 0 00:26:07.062 09:54:30 -- host/failover.sh@76 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:26:07.321 [2024-04-18 09:54:30.560648] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:26:07.321 09:54:30 -- host/failover.sh@77 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:26:07.580 [2024-04-18 09:54:30.785006] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4422 *** 00:26:07.580 09:54:30 -- host/failover.sh@78 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:26:07.838 NVMe0n1 00:26:07.838 09:54:31 -- host/failover.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:26:08.098 00:26:08.098 09:54:31 -- host/failover.sh@80 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:26:08.358 00:26:08.358 09:54:31 -- host/failover.sh@82 -- # grep -q NVMe0 00:26:08.358 09:54:31 -- host/failover.sh@82 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:26:08.976 09:54:32 -- host/failover.sh@84 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:26:08.976 09:54:32 -- host/failover.sh@87 -- # sleep 3 00:26:12.267 09:54:35 -- host/failover.sh@88 -- # grep -q NVMe0 00:26:12.267 09:54:35 -- host/failover.sh@88 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:26:12.267 09:54:35 -- host/failover.sh@90 -- # run_test_pid=76187 00:26:12.267 09:54:35 -- host/failover.sh@89 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:26:12.267 09:54:35 -- host/failover.sh@92 -- # wait 76187 00:26:13.639 0 00:26:13.639 09:54:36 -- host/failover.sh@94 -- # cat 
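For readability, the RPC sequence traced above can be consolidated; every call below appears verbatim in the trace, and only the $rpc shorthand and the ordering comments are added here. The target advertises two extra TCP listeners, the bdevperf initiator attaches the same NVMe0 controller through all three ports (the extra ports become failover paths), and dropping the active 4420 path forces the first failover.

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  # target side: advertise two additional TCP listeners for the same subsystem
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422
  # initiator side (bdevperf over /var/tmp/bdevperf.sock): attach the first path,
  # register 4421/4422 as further paths, then drop 4420 to force a failover
  $rpc -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
  $rpc -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
  $rpc -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
  $rpc -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers | grep -q NVMe0
  $rpc -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
  sleep 3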
/home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:26:13.639 [2024-04-18 09:54:29.454814] Starting SPDK v24.05-pre git sha1 65b4e17c6 / DPDK 23.11.0 initialization... 00:26:13.639 [2024-04-18 09:54:29.455038] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76110 ] 00:26:13.639 [2024-04-18 09:54:29.631760] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:13.639 [2024-04-18 09:54:29.880216] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:26:13.639 [2024-04-18 09:54:32.258972] bdev_nvme.c:1856:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4420 to 10.0.0.2:4421 00:26:13.639 [2024-04-18 09:54:32.259163] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:26:13.639 [2024-04-18 09:54:32.259204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:13.639 [2024-04-18 09:54:32.259233] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:26:13.639 [2024-04-18 09:54:32.259257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:13.639 [2024-04-18 09:54:32.259277] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:26:13.639 [2024-04-18 09:54:32.259298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:13.639 [2024-04-18 09:54:32.259318] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:26:13.639 [2024-04-18 09:54:32.259339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:13.639 [2024-04-18 09:54:32.259364] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:13.639 [2024-04-18 09:54:32.259455] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:13.639 [2024-04-18 09:54:32.259507] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x614000004c40 (9): Bad file descriptor 00:26:13.639 [2024-04-18 09:54:32.264314] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:26:13.639 Running I/O for 1 seconds... 
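The capture above, and the one-second summary that follows, come from the bdevperf process started at failover.sh@72-73 and kicked off at @89. Both binary paths and all flags are taken from the trace; the backgrounding and the redirection into try.txt are assumptions made only to keep this sketch self-contained.

  # start bdevperf in RPC-wait mode (-z); -t 1 gives the one-second verify run summarized below
  /home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
      -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 1 -f \
      > /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 2>&1 &
  # paths are attached/detached via rpc.py as shown above, then the run is triggered:
  /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests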
00:26:13.639
00:26:13.639 Latency(us)
00:26:13.639 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:26:13.639 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:26:13.639 Verification LBA range: start 0x0 length 0x4000
00:26:13.639 NVMe0n1 : 1.01 5206.14 20.34 0.00 0.00 24486.48 3395.96 22163.08
00:26:13.639 ===================================================================================================================
00:26:13.639 Total : 5206.14 20.34 0.00 0.00 24486.48 3395.96 22163.08
00:26:13.639 09:54:36 -- host/failover.sh@95 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers
00:26:13.639 09:54:36 -- host/failover.sh@95 -- # grep -q NVMe0
00:26:13.639 09:54:36 -- host/failover.sh@98 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
00:26:13.897 09:54:37 -- host/failover.sh@99 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers
00:26:13.897 09:54:37 -- host/failover.sh@99 -- # grep -q NVMe0
00:26:14.463 09:54:37 -- host/failover.sh@100 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
00:26:14.463 09:54:37 -- host/failover.sh@101 -- # sleep 3
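The detach calls traced above are what exercises failover in this phase: each one removes the path currently in use, and the grep on bdev_nvme_get_controllers confirms NVMe0 is still attached through a surviving path. A compact, illustrative way to express that intent (the real script issues the calls one by one exactly as logged):

  rpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock"
  for port in 4422 4421; do
      $rpc bdev_nvme_get_controllers | grep -q NVMe0 || exit 1   # controller must still be present
      $rpc bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s "$port" -f ipv4 -n nqn.2016-06.io.spdk:cnode1
  done
  sleep 3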
-- # modprobe -v -r nvme-fabrics 00:26:19.423 09:54:42 -- nvmf/common.sh@124 -- # set -e 00:26:19.423 09:54:42 -- nvmf/common.sh@125 -- # return 0 00:26:19.423 09:54:42 -- nvmf/common.sh@478 -- # '[' -n 75849 ']' 00:26:19.423 09:54:42 -- nvmf/common.sh@479 -- # killprocess 75849 00:26:19.423 09:54:42 -- common/autotest_common.sh@936 -- # '[' -z 75849 ']' 00:26:19.423 09:54:42 -- common/autotest_common.sh@940 -- # kill -0 75849 00:26:19.423 09:54:42 -- common/autotest_common.sh@941 -- # uname 00:26:19.423 09:54:42 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:26:19.423 09:54:42 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 75849 00:26:19.423 killing process with pid 75849 00:26:19.423 09:54:42 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:26:19.423 09:54:42 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:26:19.423 09:54:42 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 75849' 00:26:19.423 09:54:42 -- common/autotest_common.sh@955 -- # kill 75849 00:26:19.423 09:54:42 -- common/autotest_common.sh@960 -- # wait 75849 00:26:20.802 09:54:43 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:26:20.802 09:54:43 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:26:20.802 09:54:43 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:26:20.802 09:54:43 -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:26:20.802 09:54:43 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:26:20.802 09:54:43 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:20.802 09:54:43 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:26:20.802 09:54:43 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:20.802 09:54:44 -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:26:20.802 00:26:20.802 real 0m36.021s 00:26:20.802 user 2m17.918s 00:26:20.802 sys 0m5.283s 00:26:20.802 09:54:44 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:26:20.802 ************************************ 00:26:20.802 09:54:44 -- common/autotest_common.sh@10 -- # set +x 00:26:20.802 END TEST nvmf_failover 00:26:20.802 ************************************ 00:26:20.802 09:54:44 -- nvmf/nvmf.sh@99 -- # run_test nvmf_discovery /home/vagrant/spdk_repo/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:26:20.802 09:54:44 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:26:20.802 09:54:44 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:26:20.802 09:54:44 -- common/autotest_common.sh@10 -- # set +x 00:26:20.802 ************************************ 00:26:20.802 START TEST nvmf_discovery 00:26:20.802 ************************************ 00:26:20.802 09:54:44 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:26:21.060 * Looking for test storage... 
00:26:21.060 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:26:21.060 09:54:44 -- host/discovery.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:26:21.060 09:54:44 -- nvmf/common.sh@7 -- # uname -s 00:26:21.060 09:54:44 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:26:21.060 09:54:44 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:26:21.060 09:54:44 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:26:21.060 09:54:44 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:26:21.060 09:54:44 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:26:21.060 09:54:44 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:26:21.060 09:54:44 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:26:21.060 09:54:44 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:26:21.060 09:54:44 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:26:21.060 09:54:44 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:26:21.060 09:54:44 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:e0c1fd61-2902-4c83-8893-eeb512aeb67d 00:26:21.060 09:54:44 -- nvmf/common.sh@18 -- # NVME_HOSTID=e0c1fd61-2902-4c83-8893-eeb512aeb67d 00:26:21.060 09:54:44 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:26:21.060 09:54:44 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:26:21.060 09:54:44 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:26:21.060 09:54:44 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:26:21.060 09:54:44 -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:26:21.060 09:54:44 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:26:21.060 09:54:44 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:26:21.060 09:54:44 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:26:21.060 09:54:44 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:21.060 09:54:44 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:21.060 09:54:44 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:21.060 09:54:44 -- paths/export.sh@5 -- # export PATH 00:26:21.060 09:54:44 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:21.060 09:54:44 -- nvmf/common.sh@47 -- # : 0 00:26:21.060 09:54:44 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:26:21.060 09:54:44 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:26:21.060 09:54:44 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:26:21.060 09:54:44 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:26:21.060 09:54:44 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:26:21.060 09:54:44 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:26:21.060 09:54:44 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:26:21.060 09:54:44 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:26:21.060 09:54:44 -- host/discovery.sh@11 -- # '[' tcp == rdma ']' 00:26:21.060 09:54:44 -- host/discovery.sh@16 -- # DISCOVERY_PORT=8009 00:26:21.060 09:54:44 -- host/discovery.sh@17 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:26:21.060 09:54:44 -- host/discovery.sh@20 -- # NQN=nqn.2016-06.io.spdk:cnode 00:26:21.060 09:54:44 -- host/discovery.sh@22 -- # HOST_NQN=nqn.2021-12.io.spdk:test 00:26:21.060 09:54:44 -- host/discovery.sh@23 -- # HOST_SOCK=/tmp/host.sock 00:26:21.060 09:54:44 -- host/discovery.sh@25 -- # nvmftestinit 00:26:21.060 09:54:44 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:26:21.060 09:54:44 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:26:21.060 09:54:44 -- nvmf/common.sh@437 -- # prepare_net_devs 00:26:21.060 09:54:44 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:26:21.060 09:54:44 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:26:21.060 09:54:44 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:21.060 09:54:44 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:26:21.060 09:54:44 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:21.060 09:54:44 -- nvmf/common.sh@403 -- # [[ virt != virt ]] 00:26:21.060 09:54:44 -- nvmf/common.sh@405 -- # [[ no == yes ]] 00:26:21.060 09:54:44 -- nvmf/common.sh@412 -- # [[ virt == phy ]] 00:26:21.060 09:54:44 -- nvmf/common.sh@415 -- # [[ virt == phy-fallback ]] 00:26:21.060 09:54:44 -- nvmf/common.sh@420 -- # [[ tcp == tcp ]] 00:26:21.060 09:54:44 -- nvmf/common.sh@421 -- # nvmf_veth_init 00:26:21.060 09:54:44 -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:26:21.060 09:54:44 -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:26:21.060 09:54:44 -- 
nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:26:21.060 09:54:44 -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:26:21.060 09:54:44 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:26:21.060 09:54:44 -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:26:21.060 09:54:44 -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:26:21.060 09:54:44 -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:26:21.060 09:54:44 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:26:21.060 09:54:44 -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:26:21.060 09:54:44 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:26:21.060 09:54:44 -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:26:21.060 09:54:44 -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:26:21.060 09:54:44 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:26:21.060 Cannot find device "nvmf_tgt_br" 00:26:21.060 09:54:44 -- nvmf/common.sh@155 -- # true 00:26:21.060 09:54:44 -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:26:21.060 Cannot find device "nvmf_tgt_br2" 00:26:21.060 09:54:44 -- nvmf/common.sh@156 -- # true 00:26:21.060 09:54:44 -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:26:21.060 09:54:44 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:26:21.060 Cannot find device "nvmf_tgt_br" 00:26:21.060 09:54:44 -- nvmf/common.sh@158 -- # true 00:26:21.060 09:54:44 -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:26:21.060 Cannot find device "nvmf_tgt_br2" 00:26:21.060 09:54:44 -- nvmf/common.sh@159 -- # true 00:26:21.060 09:54:44 -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:26:21.060 09:54:44 -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:26:21.060 09:54:44 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:26:21.060 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:26:21.060 09:54:44 -- nvmf/common.sh@162 -- # true 00:26:21.060 09:54:44 -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:26:21.060 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:26:21.060 09:54:44 -- nvmf/common.sh@163 -- # true 00:26:21.060 09:54:44 -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:26:21.060 09:54:44 -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:26:21.060 09:54:44 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:26:21.060 09:54:44 -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:26:21.060 09:54:44 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:26:21.060 09:54:44 -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:26:21.060 09:54:44 -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:26:21.060 09:54:44 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:26:21.317 09:54:44 -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:26:21.317 09:54:44 -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:26:21.317 09:54:44 -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:26:21.317 09:54:44 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:26:21.317 09:54:44 -- 
nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:26:21.317 09:54:44 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:26:21.317 09:54:44 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:26:21.317 09:54:44 -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:26:21.317 09:54:44 -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:26:21.317 09:54:44 -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:26:21.317 09:54:44 -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:26:21.317 09:54:44 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:26:21.317 09:54:44 -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:26:21.317 09:54:44 -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:26:21.317 09:54:44 -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:26:21.317 09:54:44 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:26:21.317 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:26:21.317 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.069 ms 00:26:21.317 00:26:21.317 --- 10.0.0.2 ping statistics --- 00:26:21.317 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:21.317 rtt min/avg/max/mdev = 0.069/0.069/0.069/0.000 ms 00:26:21.317 09:54:44 -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:26:21.317 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:26:21.317 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.033 ms 00:26:21.317 00:26:21.317 --- 10.0.0.3 ping statistics --- 00:26:21.317 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:21.317 rtt min/avg/max/mdev = 0.033/0.033/0.033/0.000 ms 00:26:21.317 09:54:44 -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:26:21.317 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:26:21.317 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.027 ms 00:26:21.317 00:26:21.317 --- 10.0.0.1 ping statistics --- 00:26:21.317 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:21.317 rtt min/avg/max/mdev = 0.027/0.027/0.027/0.000 ms 00:26:21.317 09:54:44 -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:26:21.317 09:54:44 -- nvmf/common.sh@422 -- # return 0 00:26:21.317 09:54:44 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:26:21.317 09:54:44 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:26:21.317 09:54:44 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:26:21.317 09:54:44 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:26:21.317 09:54:44 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:26:21.317 09:54:44 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:26:21.318 09:54:44 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:26:21.318 09:54:44 -- host/discovery.sh@30 -- # nvmfappstart -m 0x2 00:26:21.318 09:54:44 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:26:21.318 09:54:44 -- common/autotest_common.sh@710 -- # xtrace_disable 00:26:21.318 09:54:44 -- common/autotest_common.sh@10 -- # set +x 00:26:21.318 09:54:44 -- nvmf/common.sh@470 -- # nvmfpid=76484 00:26:21.318 09:54:44 -- nvmf/common.sh@471 -- # waitforlisten 76484 00:26:21.318 09:54:44 -- common/autotest_common.sh@817 -- # '[' -z 76484 ']' 00:26:21.318 09:54:44 -- nvmf/common.sh@469 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:26:21.318 09:54:44 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:21.318 09:54:44 -- common/autotest_common.sh@822 -- # local max_retries=100 00:26:21.318 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:21.318 09:54:44 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:21.318 09:54:44 -- common/autotest_common.sh@826 -- # xtrace_disable 00:26:21.318 09:54:44 -- common/autotest_common.sh@10 -- # set +x 00:26:21.575 [2024-04-18 09:54:44.728763] Starting SPDK v24.05-pre git sha1 65b4e17c6 / DPDK 23.11.0 initialization... 00:26:21.575 [2024-04-18 09:54:44.728943] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:26:21.575 [2024-04-18 09:54:44.902280] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:21.832 [2024-04-18 09:54:45.182742] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:26:21.832 [2024-04-18 09:54:45.182834] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:26:21.832 [2024-04-18 09:54:45.182857] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:26:21.832 [2024-04-18 09:54:45.182887] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:26:21.832 [2024-04-18 09:54:45.182904] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
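For readers following the nvmf_veth_init calls traced above: stripped of the helper indirection, the network the test builds before launching the target condenses to the commands below. This is a sketch reconstructed from the trace (namespace, interface, and address names are the ones the log shows); the real nvmf/common.sh helper also tears down any leftover devices first, and it adds a second target interface (nvmf_tgt_if2 at 10.0.0.3) the same way.

# Sketch of the veth/namespace/bridge topology reconstructed from the trace above.
ip netns add nvmf_tgt_ns_spdk                                   # target runs inside its own namespace
ip link add nvmf_init_if type veth peer name nvmf_init_br       # initiator-side veth pair
ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br         # target-side veth pair
ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk                  # move the target end into the namespace
ip addr add 10.0.0.1/24 dev nvmf_init_if                        # initiator address
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if   # target address
ip link set nvmf_init_if up
ip link set nvmf_init_br up
ip link set nvmf_tgt_br up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
ip netns exec nvmf_tgt_ns_spdk ip link set lo up
ip link add nvmf_br type bridge && ip link set nvmf_br up       # bridge joins the host-side veth ends
ip link set nvmf_init_br master nvmf_br
ip link set nvmf_tgt_br master nvmf_br
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT   # let NVMe/TCP traffic in
iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
ping -c 1 10.0.0.2                                              # host -> target reachability check
ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1               # target -> host reachability check

Once the pings succeed, the target is started inside the namespace (per the trace, ip netns exec nvmf_tgt_ns_spdk .../nvmf_tgt -i 0 -e 0xFFFF -m 0x2), so it only sees the namespaced interfaces while a second, host-side SPDK app acts as the NVMe-oF initiator.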
00:26:21.833 [2024-04-18 09:54:45.182954] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:26:22.399 09:54:45 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:26:22.399 09:54:45 -- common/autotest_common.sh@850 -- # return 0 00:26:22.399 09:54:45 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:26:22.399 09:54:45 -- common/autotest_common.sh@716 -- # xtrace_disable 00:26:22.399 09:54:45 -- common/autotest_common.sh@10 -- # set +x 00:26:22.399 09:54:45 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:26:22.399 09:54:45 -- host/discovery.sh@32 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:26:22.399 09:54:45 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:22.399 09:54:45 -- common/autotest_common.sh@10 -- # set +x 00:26:22.399 [2024-04-18 09:54:45.630655] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:26:22.399 09:54:45 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:22.400 09:54:45 -- host/discovery.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery -t tcp -a 10.0.0.2 -s 8009 00:26:22.400 09:54:45 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:22.400 09:54:45 -- common/autotest_common.sh@10 -- # set +x 00:26:22.400 [2024-04-18 09:54:45.638821] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:26:22.400 09:54:45 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:22.400 09:54:45 -- host/discovery.sh@35 -- # rpc_cmd bdev_null_create null0 1000 512 00:26:22.400 09:54:45 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:22.400 09:54:45 -- common/autotest_common.sh@10 -- # set +x 00:26:22.400 null0 00:26:22.400 09:54:45 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:22.400 09:54:45 -- host/discovery.sh@36 -- # rpc_cmd bdev_null_create null1 1000 512 00:26:22.400 09:54:45 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:22.400 09:54:45 -- common/autotest_common.sh@10 -- # set +x 00:26:22.400 null1 00:26:22.400 09:54:45 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:22.400 09:54:45 -- host/discovery.sh@37 -- # rpc_cmd bdev_wait_for_examine 00:26:22.400 09:54:45 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:22.400 09:54:45 -- common/autotest_common.sh@10 -- # set +x 00:26:22.400 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:26:22.400 09:54:45 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:22.400 09:54:45 -- host/discovery.sh@45 -- # hostpid=76516 00:26:22.400 09:54:45 -- host/discovery.sh@46 -- # waitforlisten 76516 /tmp/host.sock 00:26:22.400 09:54:45 -- host/discovery.sh@44 -- # /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock 00:26:22.400 09:54:45 -- common/autotest_common.sh@817 -- # '[' -z 76516 ']' 00:26:22.400 09:54:45 -- common/autotest_common.sh@821 -- # local rpc_addr=/tmp/host.sock 00:26:22.400 09:54:45 -- common/autotest_common.sh@822 -- # local max_retries=100 00:26:22.400 09:54:45 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:26:22.400 09:54:45 -- common/autotest_common.sh@826 -- # xtrace_disable 00:26:22.400 09:54:45 -- common/autotest_common.sh@10 -- # set +x 00:26:22.400 [2024-04-18 09:54:45.774155] Starting SPDK v24.05-pre git sha1 65b4e17c6 / DPDK 23.11.0 initialization... 
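The RPC traffic traced above and continued below is easier to read as the call sequence it reduces to. A condensed sketch follows, written against plain scripts/rpc.py (rpc_cmd in the trace is the autotest wrapper around it); arguments are copied from the trace, and the comments are interpretation, not part of the log.

# Target side (the nvmf_tgt started in the namespace, default RPC socket):
rpc.py nvmf_create_transport -t tcp -o -u 8192                  # TCP transport, options as used by the test
rpc.py nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery -t tcp -a 10.0.0.2 -s 8009
rpc.py bdev_null_create null0 1000 512                          # null bdevs that will back the namespaces
rpc.py bdev_null_create null1 1000 512
rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0
rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0
rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2021-12.io.spdk:test

# Host side (a second nvmf_tgt started with -m 0x1 -r /tmp/host.sock, per the trace):
rpc.py -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test
rpc.py -s /tmp/host.sock bdev_nvme_get_controllers              # the test expects controller "nvme0" to appear
rpc.py -s /tmp/host.sock bdev_get_bdevs                         # then nvme0n1, and nvme0n2 once null1 is added

The rest of the trace is the test repeatedly comparing the output of these last two calls (plus notify_get_notifications) against the expected subsystem names, bdev lists, and notification counts as listeners are added and removed.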
00:26:22.400 [2024-04-18 09:54:45.775025] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76516 ] 00:26:22.658 [2024-04-18 09:54:45.962786] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:22.917 [2024-04-18 09:54:46.236856] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:26:23.483 09:54:46 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:26:23.483 09:54:46 -- common/autotest_common.sh@850 -- # return 0 00:26:23.483 09:54:46 -- host/discovery.sh@48 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:26:23.483 09:54:46 -- host/discovery.sh@50 -- # rpc_cmd -s /tmp/host.sock log_set_flag bdev_nvme 00:26:23.483 09:54:46 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:23.483 09:54:46 -- common/autotest_common.sh@10 -- # set +x 00:26:23.483 09:54:46 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:23.484 09:54:46 -- host/discovery.sh@51 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test 00:26:23.484 09:54:46 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:23.484 09:54:46 -- common/autotest_common.sh@10 -- # set +x 00:26:23.484 09:54:46 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:23.484 09:54:46 -- host/discovery.sh@72 -- # notify_id=0 00:26:23.484 09:54:46 -- host/discovery.sh@83 -- # get_subsystem_names 00:26:23.484 09:54:46 -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:26:23.484 09:54:46 -- host/discovery.sh@59 -- # jq -r '.[].name' 00:26:23.484 09:54:46 -- host/discovery.sh@59 -- # sort 00:26:23.484 09:54:46 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:23.484 09:54:46 -- common/autotest_common.sh@10 -- # set +x 00:26:23.484 09:54:46 -- host/discovery.sh@59 -- # xargs 00:26:23.484 09:54:46 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:23.484 09:54:46 -- host/discovery.sh@83 -- # [[ '' == '' ]] 00:26:23.484 09:54:46 -- host/discovery.sh@84 -- # get_bdev_list 00:26:23.484 09:54:46 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:26:23.484 09:54:46 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:23.484 09:54:46 -- host/discovery.sh@55 -- # sort 00:26:23.484 09:54:46 -- host/discovery.sh@55 -- # xargs 00:26:23.484 09:54:46 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:23.484 09:54:46 -- common/autotest_common.sh@10 -- # set +x 00:26:23.484 09:54:46 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:23.484 09:54:46 -- host/discovery.sh@84 -- # [[ '' == '' ]] 00:26:23.484 09:54:46 -- host/discovery.sh@86 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 00:26:23.484 09:54:46 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:23.484 09:54:46 -- common/autotest_common.sh@10 -- # set +x 00:26:23.484 09:54:46 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:23.484 09:54:46 -- host/discovery.sh@87 -- # get_subsystem_names 00:26:23.484 09:54:46 -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:26:23.484 09:54:46 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:23.484 09:54:46 -- common/autotest_common.sh@10 -- # set +x 00:26:23.484 09:54:46 -- host/discovery.sh@59 -- # jq -r '.[].name' 00:26:23.484 09:54:46 -- host/discovery.sh@59 
-- # sort 00:26:23.484 09:54:46 -- host/discovery.sh@59 -- # xargs 00:26:23.484 09:54:46 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:23.484 09:54:46 -- host/discovery.sh@87 -- # [[ '' == '' ]] 00:26:23.484 09:54:46 -- host/discovery.sh@88 -- # get_bdev_list 00:26:23.484 09:54:46 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:26:23.484 09:54:46 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:23.484 09:54:46 -- host/discovery.sh@55 -- # sort 00:26:23.484 09:54:46 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:23.484 09:54:46 -- host/discovery.sh@55 -- # xargs 00:26:23.484 09:54:46 -- common/autotest_common.sh@10 -- # set +x 00:26:23.484 09:54:46 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:23.484 09:54:46 -- host/discovery.sh@88 -- # [[ '' == '' ]] 00:26:23.484 09:54:46 -- host/discovery.sh@90 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 00:26:23.484 09:54:46 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:23.484 09:54:46 -- common/autotest_common.sh@10 -- # set +x 00:26:23.743 09:54:46 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:23.743 09:54:46 -- host/discovery.sh@91 -- # get_subsystem_names 00:26:23.743 09:54:46 -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:26:23.743 09:54:46 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:23.743 09:54:46 -- common/autotest_common.sh@10 -- # set +x 00:26:23.743 09:54:46 -- host/discovery.sh@59 -- # jq -r '.[].name' 00:26:23.743 09:54:46 -- host/discovery.sh@59 -- # sort 00:26:23.743 09:54:46 -- host/discovery.sh@59 -- # xargs 00:26:23.743 09:54:46 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:23.743 09:54:46 -- host/discovery.sh@91 -- # [[ '' == '' ]] 00:26:23.743 09:54:46 -- host/discovery.sh@92 -- # get_bdev_list 00:26:23.743 09:54:46 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:23.743 09:54:46 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:23.743 09:54:46 -- common/autotest_common.sh@10 -- # set +x 00:26:23.743 09:54:46 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:26:23.743 09:54:46 -- host/discovery.sh@55 -- # sort 00:26:23.743 09:54:46 -- host/discovery.sh@55 -- # xargs 00:26:23.743 09:54:46 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:23.743 09:54:46 -- host/discovery.sh@92 -- # [[ '' == '' ]] 00:26:23.743 09:54:46 -- host/discovery.sh@96 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:26:23.743 09:54:46 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:23.743 09:54:46 -- common/autotest_common.sh@10 -- # set +x 00:26:23.743 [2024-04-18 09:54:46.991371] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:26:23.744 09:54:46 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:23.744 09:54:46 -- host/discovery.sh@97 -- # get_subsystem_names 00:26:23.744 09:54:46 -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:26:23.744 09:54:46 -- host/discovery.sh@59 -- # jq -r '.[].name' 00:26:23.744 09:54:46 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:23.744 09:54:46 -- host/discovery.sh@59 -- # sort 00:26:23.744 09:54:46 -- common/autotest_common.sh@10 -- # set +x 00:26:23.744 09:54:46 -- host/discovery.sh@59 -- # xargs 00:26:23.744 09:54:47 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:23.744 09:54:47 -- host/discovery.sh@97 -- # [[ '' == '' ]] 00:26:23.744 09:54:47 
-- host/discovery.sh@98 -- # get_bdev_list 00:26:23.744 09:54:47 -- host/discovery.sh@55 -- # xargs 00:26:23.744 09:54:47 -- host/discovery.sh@55 -- # sort 00:26:23.744 09:54:47 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:26:23.744 09:54:47 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:23.744 09:54:47 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:23.744 09:54:47 -- common/autotest_common.sh@10 -- # set +x 00:26:23.744 09:54:47 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:23.744 09:54:47 -- host/discovery.sh@98 -- # [[ '' == '' ]] 00:26:23.744 09:54:47 -- host/discovery.sh@99 -- # is_notification_count_eq 0 00:26:23.744 09:54:47 -- host/discovery.sh@79 -- # expected_count=0 00:26:23.744 09:54:47 -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:26:23.744 09:54:47 -- common/autotest_common.sh@900 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:26:23.744 09:54:47 -- common/autotest_common.sh@901 -- # local max=10 00:26:23.744 09:54:47 -- common/autotest_common.sh@902 -- # (( max-- )) 00:26:23.744 09:54:47 -- common/autotest_common.sh@903 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:26:23.744 09:54:47 -- common/autotest_common.sh@903 -- # get_notification_count 00:26:23.744 09:54:47 -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:26:23.744 09:54:47 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:23.744 09:54:47 -- common/autotest_common.sh@10 -- # set +x 00:26:23.744 09:54:47 -- host/discovery.sh@74 -- # jq '. | length' 00:26:23.744 09:54:47 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:24.002 09:54:47 -- host/discovery.sh@74 -- # notification_count=0 00:26:24.002 09:54:47 -- host/discovery.sh@75 -- # notify_id=0 00:26:24.002 09:54:47 -- common/autotest_common.sh@903 -- # (( notification_count == expected_count )) 00:26:24.002 09:54:47 -- common/autotest_common.sh@904 -- # return 0 00:26:24.002 09:54:47 -- host/discovery.sh@103 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2021-12.io.spdk:test 00:26:24.002 09:54:47 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:24.002 09:54:47 -- common/autotest_common.sh@10 -- # set +x 00:26:24.002 09:54:47 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:24.002 09:54:47 -- host/discovery.sh@105 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:26:24.002 09:54:47 -- common/autotest_common.sh@900 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:26:24.002 09:54:47 -- common/autotest_common.sh@901 -- # local max=10 00:26:24.002 09:54:47 -- common/autotest_common.sh@902 -- # (( max-- )) 00:26:24.002 09:54:47 -- common/autotest_common.sh@903 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:26:24.002 09:54:47 -- common/autotest_common.sh@903 -- # get_subsystem_names 00:26:24.002 09:54:47 -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:26:24.002 09:54:47 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:24.002 09:54:47 -- common/autotest_common.sh@10 -- # set +x 00:26:24.002 09:54:47 -- host/discovery.sh@59 -- # jq -r '.[].name' 00:26:24.002 09:54:47 -- host/discovery.sh@59 -- # sort 00:26:24.002 09:54:47 -- host/discovery.sh@59 -- # xargs 00:26:24.002 09:54:47 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:24.002 09:54:47 -- common/autotest_common.sh@903 -- 
# [[ '' == \n\v\m\e\0 ]] 00:26:24.002 09:54:47 -- common/autotest_common.sh@906 -- # sleep 1 00:26:24.570 [2024-04-18 09:54:47.655978] bdev_nvme.c:6906:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:26:24.570 [2024-04-18 09:54:47.656046] bdev_nvme.c:6986:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:26:24.570 [2024-04-18 09:54:47.656090] bdev_nvme.c:6869:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:26:24.570 [2024-04-18 09:54:47.662033] bdev_nvme.c:6835:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme0 00:26:24.570 [2024-04-18 09:54:47.726535] bdev_nvme.c:6725:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:26:24.570 [2024-04-18 09:54:47.726592] bdev_nvme.c:6684:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:26:25.137 09:54:48 -- common/autotest_common.sh@902 -- # (( max-- )) 00:26:25.137 09:54:48 -- common/autotest_common.sh@903 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:26:25.137 09:54:48 -- common/autotest_common.sh@903 -- # get_subsystem_names 00:26:25.137 09:54:48 -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:26:25.137 09:54:48 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:25.137 09:54:48 -- host/discovery.sh@59 -- # jq -r '.[].name' 00:26:25.137 09:54:48 -- host/discovery.sh@59 -- # sort 00:26:25.137 09:54:48 -- common/autotest_common.sh@10 -- # set +x 00:26:25.137 09:54:48 -- host/discovery.sh@59 -- # xargs 00:26:25.137 09:54:48 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:25.137 09:54:48 -- common/autotest_common.sh@903 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:25.137 09:54:48 -- common/autotest_common.sh@904 -- # return 0 00:26:25.137 09:54:48 -- host/discovery.sh@106 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1" ]]' 00:26:25.137 09:54:48 -- common/autotest_common.sh@900 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1" ]]' 00:26:25.137 09:54:48 -- common/autotest_common.sh@901 -- # local max=10 00:26:25.137 09:54:48 -- common/autotest_common.sh@902 -- # (( max-- )) 00:26:25.137 09:54:48 -- common/autotest_common.sh@903 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1"' ']]' 00:26:25.137 09:54:48 -- common/autotest_common.sh@903 -- # get_bdev_list 00:26:25.137 09:54:48 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:25.137 09:54:48 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:26:25.137 09:54:48 -- host/discovery.sh@55 -- # sort 00:26:25.137 09:54:48 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:25.137 09:54:48 -- common/autotest_common.sh@10 -- # set +x 00:26:25.137 09:54:48 -- host/discovery.sh@55 -- # xargs 00:26:25.137 09:54:48 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:25.137 09:54:48 -- common/autotest_common.sh@903 -- # [[ nvme0n1 == \n\v\m\e\0\n\1 ]] 00:26:25.137 09:54:48 -- common/autotest_common.sh@904 -- # return 0 00:26:25.137 09:54:48 -- host/discovery.sh@107 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT" ]]' 00:26:25.137 09:54:48 -- common/autotest_common.sh@900 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT" ]]' 00:26:25.137 09:54:48 -- common/autotest_common.sh@901 -- # local max=10 00:26:25.137 09:54:48 -- common/autotest_common.sh@902 -- # (( max-- )) 00:26:25.137 09:54:48 
-- common/autotest_common.sh@903 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT"' ']]' 00:26:25.137 09:54:48 -- common/autotest_common.sh@903 -- # get_subsystem_paths nvme0 00:26:25.137 09:54:48 -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:26:25.137 09:54:48 -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:26:25.137 09:54:48 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:25.137 09:54:48 -- common/autotest_common.sh@10 -- # set +x 00:26:25.137 09:54:48 -- host/discovery.sh@63 -- # sort -n 00:26:25.137 09:54:48 -- host/discovery.sh@63 -- # xargs 00:26:25.137 09:54:48 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:25.137 09:54:48 -- common/autotest_common.sh@903 -- # [[ 4420 == \4\4\2\0 ]] 00:26:25.137 09:54:48 -- common/autotest_common.sh@904 -- # return 0 00:26:25.137 09:54:48 -- host/discovery.sh@108 -- # is_notification_count_eq 1 00:26:25.137 09:54:48 -- host/discovery.sh@79 -- # expected_count=1 00:26:25.137 09:54:48 -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:26:25.137 09:54:48 -- common/autotest_common.sh@900 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:26:25.137 09:54:48 -- common/autotest_common.sh@901 -- # local max=10 00:26:25.137 09:54:48 -- common/autotest_common.sh@902 -- # (( max-- )) 00:26:25.137 09:54:48 -- common/autotest_common.sh@903 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:26:25.138 09:54:48 -- common/autotest_common.sh@903 -- # get_notification_count 00:26:25.138 09:54:48 -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:26:25.138 09:54:48 -- host/discovery.sh@74 -- # jq '. 
| length' 00:26:25.138 09:54:48 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:25.138 09:54:48 -- common/autotest_common.sh@10 -- # set +x 00:26:25.138 09:54:48 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:25.138 09:54:48 -- host/discovery.sh@74 -- # notification_count=1 00:26:25.138 09:54:48 -- host/discovery.sh@75 -- # notify_id=1 00:26:25.138 09:54:48 -- common/autotest_common.sh@903 -- # (( notification_count == expected_count )) 00:26:25.138 09:54:48 -- common/autotest_common.sh@904 -- # return 0 00:26:25.138 09:54:48 -- host/discovery.sh@111 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null1 00:26:25.138 09:54:48 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:25.138 09:54:48 -- common/autotest_common.sh@10 -- # set +x 00:26:25.138 09:54:48 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:25.138 09:54:48 -- host/discovery.sh@113 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:26:25.138 09:54:48 -- common/autotest_common.sh@900 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:26:25.138 09:54:48 -- common/autotest_common.sh@901 -- # local max=10 00:26:25.138 09:54:48 -- common/autotest_common.sh@902 -- # (( max-- )) 00:26:25.138 09:54:48 -- common/autotest_common.sh@903 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:26:25.138 09:54:48 -- common/autotest_common.sh@903 -- # get_bdev_list 00:26:25.138 09:54:48 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:25.138 09:54:48 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:26:25.138 09:54:48 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:25.138 09:54:48 -- host/discovery.sh@55 -- # sort 00:26:25.138 09:54:48 -- common/autotest_common.sh@10 -- # set +x 00:26:25.138 09:54:48 -- host/discovery.sh@55 -- # xargs 00:26:25.138 09:54:48 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:25.397 09:54:48 -- common/autotest_common.sh@903 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:26:25.397 09:54:48 -- common/autotest_common.sh@904 -- # return 0 00:26:25.397 09:54:48 -- host/discovery.sh@114 -- # is_notification_count_eq 1 00:26:25.397 09:54:48 -- host/discovery.sh@79 -- # expected_count=1 00:26:25.397 09:54:48 -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:26:25.397 09:54:48 -- common/autotest_common.sh@900 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:26:25.397 09:54:48 -- common/autotest_common.sh@901 -- # local max=10 00:26:25.397 09:54:48 -- common/autotest_common.sh@902 -- # (( max-- )) 00:26:25.397 09:54:48 -- common/autotest_common.sh@903 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:26:25.397 09:54:48 -- common/autotest_common.sh@903 -- # get_notification_count 00:26:25.397 09:54:48 -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 1 00:26:25.397 09:54:48 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:25.397 09:54:48 -- common/autotest_common.sh@10 -- # set +x 00:26:25.397 09:54:48 -- host/discovery.sh@74 -- # jq '. 
| length' 00:26:25.397 09:54:48 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:25.397 09:54:48 -- host/discovery.sh@74 -- # notification_count=1 00:26:25.397 09:54:48 -- host/discovery.sh@75 -- # notify_id=2 00:26:25.397 09:54:48 -- common/autotest_common.sh@903 -- # (( notification_count == expected_count )) 00:26:25.397 09:54:48 -- common/autotest_common.sh@904 -- # return 0 00:26:25.397 09:54:48 -- host/discovery.sh@118 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 00:26:25.397 09:54:48 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:25.397 09:54:48 -- common/autotest_common.sh@10 -- # set +x 00:26:25.397 [2024-04-18 09:54:48.586518] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:26:25.397 [2024-04-18 09:54:48.587881] bdev_nvme.c:6888:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:26:25.397 [2024-04-18 09:54:48.588090] bdev_nvme.c:6869:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:26:25.397 09:54:48 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:25.397 09:54:48 -- host/discovery.sh@120 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:26:25.397 09:54:48 -- common/autotest_common.sh@900 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:26:25.397 09:54:48 -- common/autotest_common.sh@901 -- # local max=10 00:26:25.397 09:54:48 -- common/autotest_common.sh@902 -- # (( max-- )) 00:26:25.397 09:54:48 -- common/autotest_common.sh@903 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:26:25.397 09:54:48 -- common/autotest_common.sh@903 -- # get_subsystem_names 00:26:25.397 [2024-04-18 09:54:48.593892] bdev_nvme.c:6830:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new path for nvme0 00:26:25.397 09:54:48 -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:26:25.397 09:54:48 -- host/discovery.sh@59 -- # jq -r '.[].name' 00:26:25.397 09:54:48 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:25.397 09:54:48 -- common/autotest_common.sh@10 -- # set +x 00:26:25.397 09:54:48 -- host/discovery.sh@59 -- # xargs 00:26:25.397 09:54:48 -- host/discovery.sh@59 -- # sort 00:26:25.398 09:54:48 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:25.398 [2024-04-18 09:54:48.657460] bdev_nvme.c:6725:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:26:25.398 [2024-04-18 09:54:48.657521] bdev_nvme.c:6684:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:26:25.398 [2024-04-18 09:54:48.657535] bdev_nvme.c:6684:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:26:25.398 09:54:48 -- common/autotest_common.sh@903 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:25.398 09:54:48 -- common/autotest_common.sh@904 -- # return 0 00:26:25.398 09:54:48 -- host/discovery.sh@121 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:26:25.398 09:54:48 -- common/autotest_common.sh@900 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:26:25.398 09:54:48 -- common/autotest_common.sh@901 -- # local max=10 00:26:25.398 09:54:48 -- common/autotest_common.sh@902 -- # (( max-- )) 00:26:25.398 09:54:48 -- common/autotest_common.sh@903 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 
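The @900-@906 lines that keep recurring above are the test's polling helper. Below is a reconstruction of its shape from those traced lines; it is a sketch, and the real helper in autotest_common.sh may report a timeout differently than the assumed return 1 here.

waitforcondition() {
        # Reconstructed from the autotest_common.sh@900-906 trace lines above.
        local cond=$1
        local max=10
        while (( max-- )); do                   # at most 10 attempts
                if eval "$cond"; then
                        return 0                # condition holds, e.g. bdev list is "nvme0n1 nvme0n2"
                fi
                sleep 1                         # give the discovery poller time to attach/update
        done
        return 1                                # assumed timeout behavior; the real helper may fail harder
}

# Used in the trace as, for example:
waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]'
waitforcondition 'get_notification_count && ((notification_count == expected_count))'

This polling is what makes the test tolerant of the asynchronous discovery log page handling visible in the bdev_nvme.c messages interleaved with the shell trace.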
00:26:25.398 09:54:48 -- common/autotest_common.sh@903 -- # get_bdev_list 00:26:25.398 09:54:48 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:25.398 09:54:48 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:26:25.398 09:54:48 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:25.398 09:54:48 -- common/autotest_common.sh@10 -- # set +x 00:26:25.398 09:54:48 -- host/discovery.sh@55 -- # xargs 00:26:25.398 09:54:48 -- host/discovery.sh@55 -- # sort 00:26:25.398 09:54:48 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:25.398 09:54:48 -- common/autotest_common.sh@903 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:26:25.398 09:54:48 -- common/autotest_common.sh@904 -- # return 0 00:26:25.398 09:54:48 -- host/discovery.sh@122 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]' 00:26:25.398 09:54:48 -- common/autotest_common.sh@900 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]' 00:26:25.398 09:54:48 -- common/autotest_common.sh@901 -- # local max=10 00:26:25.398 09:54:48 -- common/autotest_common.sh@902 -- # (( max-- )) 00:26:25.398 09:54:48 -- common/autotest_common.sh@903 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT' '$NVMF_SECOND_PORT"' ']]' 00:26:25.398 09:54:48 -- common/autotest_common.sh@903 -- # get_subsystem_paths nvme0 00:26:25.398 09:54:48 -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:26:25.398 09:54:48 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:25.398 09:54:48 -- common/autotest_common.sh@10 -- # set +x 00:26:25.398 09:54:48 -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:26:25.398 09:54:48 -- host/discovery.sh@63 -- # xargs 00:26:25.398 09:54:48 -- host/discovery.sh@63 -- # sort -n 00:26:25.398 09:54:48 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:25.398 09:54:48 -- common/autotest_common.sh@903 -- # [[ 4420 4421 == \4\4\2\0\ \4\4\2\1 ]] 00:26:25.398 09:54:48 -- common/autotest_common.sh@904 -- # return 0 00:26:25.398 09:54:48 -- host/discovery.sh@123 -- # is_notification_count_eq 0 00:26:25.398 09:54:48 -- host/discovery.sh@79 -- # expected_count=0 00:26:25.398 09:54:48 -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:26:25.398 09:54:48 -- common/autotest_common.sh@900 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:26:25.398 09:54:48 -- common/autotest_common.sh@901 -- # local max=10 00:26:25.398 09:54:48 -- common/autotest_common.sh@902 -- # (( max-- )) 00:26:25.398 09:54:48 -- common/autotest_common.sh@903 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:26:25.398 09:54:48 -- common/autotest_common.sh@903 -- # get_notification_count 00:26:25.398 09:54:48 -- host/discovery.sh@74 -- # jq '. 
| length' 00:26:25.398 09:54:48 -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:26:25.398 09:54:48 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:25.398 09:54:48 -- common/autotest_common.sh@10 -- # set +x 00:26:25.657 09:54:48 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:25.657 09:54:48 -- host/discovery.sh@74 -- # notification_count=0 00:26:25.657 09:54:48 -- host/discovery.sh@75 -- # notify_id=2 00:26:25.657 09:54:48 -- common/autotest_common.sh@903 -- # (( notification_count == expected_count )) 00:26:25.657 09:54:48 -- common/autotest_common.sh@904 -- # return 0 00:26:25.657 09:54:48 -- host/discovery.sh@127 -- # rpc_cmd nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:26:25.657 09:54:48 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:25.657 09:54:48 -- common/autotest_common.sh@10 -- # set +x 00:26:25.657 [2024-04-18 09:54:48.839913] bdev_nvme.c:6888:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:26:25.657 [2024-04-18 09:54:48.840167] bdev_nvme.c:6869:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:26:25.657 [2024-04-18 09:54:48.842798] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:26:25.657 [2024-04-18 09:54:48.842987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:25.657 [2024-04-18 09:54:48.843147] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 ns 09:54:48 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:25.657 id:0 cdw10:00000000 cdw11:00000000 00:26:25.657 [2024-04-18 09:54:48.843385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:25.657 [2024-04-18 09:54:48.843411] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:26:25.657 09:54:48 -- host/discovery.sh@129 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:26:25.657 [2024-04-18 09:54:48.843425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:25.657 [2024-04-18 09:54:48.843441] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:26:25.657 [2024-04-18 09:54:48.843454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:25.657 [2024-04-18 09:54:48.843468] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x614000005640 is same with the state(5) to be set 00:26:25.657 09:54:48 -- common/autotest_common.sh@900 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:26:25.657 09:54:48 -- common/autotest_common.sh@901 -- # local max=10 00:26:25.657 09:54:48 -- common/autotest_common.sh@902 -- # (( max-- )) 00:26:25.657 09:54:48 -- common/autotest_common.sh@903 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:26:25.657 09:54:48 -- common/autotest_common.sh@903 -- # get_subsystem_names 00:26:25.657 [2024-04-18 09:54:48.846592] bdev_nvme.c:6693:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM 
nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 not found 00:26:25.657 [2024-04-18 09:54:48.846646] bdev_nvme.c:6684:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:26:25.657 [2024-04-18 09:54:48.846756] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x614000005640 (9): Bad file descriptor 00:26:25.657 09:54:48 -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:26:25.657 09:54:48 -- host/discovery.sh@59 -- # sort 00:26:25.657 09:54:48 -- host/discovery.sh@59 -- # jq -r '.[].name' 00:26:25.657 09:54:48 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:25.657 09:54:48 -- host/discovery.sh@59 -- # xargs 00:26:25.657 09:54:48 -- common/autotest_common.sh@10 -- # set +x 00:26:25.657 09:54:48 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:25.657 09:54:48 -- common/autotest_common.sh@903 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:25.657 09:54:48 -- common/autotest_common.sh@904 -- # return 0 00:26:25.657 09:54:48 -- host/discovery.sh@130 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:26:25.657 09:54:48 -- common/autotest_common.sh@900 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:26:25.657 09:54:48 -- common/autotest_common.sh@901 -- # local max=10 00:26:25.657 09:54:48 -- common/autotest_common.sh@902 -- # (( max-- )) 00:26:25.657 09:54:48 -- common/autotest_common.sh@903 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:26:25.657 09:54:48 -- common/autotest_common.sh@903 -- # get_bdev_list 00:26:25.657 09:54:48 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:25.657 09:54:48 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:25.657 09:54:48 -- common/autotest_common.sh@10 -- # set +x 00:26:25.657 09:54:48 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:26:25.657 09:54:48 -- host/discovery.sh@55 -- # sort 00:26:25.657 09:54:48 -- host/discovery.sh@55 -- # xargs 00:26:25.657 09:54:48 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:25.657 09:54:48 -- common/autotest_common.sh@903 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:26:25.657 09:54:48 -- common/autotest_common.sh@904 -- # return 0 00:26:25.657 09:54:48 -- host/discovery.sh@131 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]' 00:26:25.657 09:54:48 -- common/autotest_common.sh@900 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]' 00:26:25.657 09:54:48 -- common/autotest_common.sh@901 -- # local max=10 00:26:25.657 09:54:48 -- common/autotest_common.sh@902 -- # (( max-- )) 00:26:25.657 09:54:48 -- common/autotest_common.sh@903 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_SECOND_PORT"' ']]' 00:26:25.657 09:54:48 -- common/autotest_common.sh@903 -- # get_subsystem_paths nvme0 00:26:25.657 09:54:48 -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:26:25.657 09:54:48 -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:26:25.657 09:54:48 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:25.657 09:54:48 -- host/discovery.sh@63 -- # sort -n 00:26:25.657 09:54:48 -- common/autotest_common.sh@10 -- # set +x 00:26:25.657 09:54:48 -- host/discovery.sh@63 -- # xargs 00:26:25.657 09:54:48 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:25.657 09:54:49 -- common/autotest_common.sh@903 -- # [[ 4421 == \4\4\2\1 ]] 00:26:25.657 
09:54:49 -- common/autotest_common.sh@904 -- # return 0 00:26:25.657 09:54:49 -- host/discovery.sh@132 -- # is_notification_count_eq 0 00:26:25.657 09:54:49 -- host/discovery.sh@79 -- # expected_count=0 00:26:25.657 09:54:49 -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:26:25.657 09:54:49 -- common/autotest_common.sh@900 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:26:25.657 09:54:49 -- common/autotest_common.sh@901 -- # local max=10 00:26:25.657 09:54:49 -- common/autotest_common.sh@902 -- # (( max-- )) 00:26:25.657 09:54:49 -- common/autotest_common.sh@903 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:26:25.657 09:54:49 -- common/autotest_common.sh@903 -- # get_notification_count 00:26:25.657 09:54:49 -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:26:25.657 09:54:49 -- host/discovery.sh@74 -- # jq '. | length' 00:26:25.657 09:54:49 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:25.657 09:54:49 -- common/autotest_common.sh@10 -- # set +x 00:26:25.658 09:54:49 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:25.916 09:54:49 -- host/discovery.sh@74 -- # notification_count=0 00:26:25.916 09:54:49 -- host/discovery.sh@75 -- # notify_id=2 00:26:25.916 09:54:49 -- common/autotest_common.sh@903 -- # (( notification_count == expected_count )) 00:26:25.916 09:54:49 -- common/autotest_common.sh@904 -- # return 0 00:26:25.916 09:54:49 -- host/discovery.sh@134 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_stop_discovery -b nvme 00:26:25.916 09:54:49 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:25.916 09:54:49 -- common/autotest_common.sh@10 -- # set +x 00:26:25.916 09:54:49 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:25.916 09:54:49 -- host/discovery.sh@136 -- # waitforcondition '[[ "$(get_subsystem_names)" == "" ]]' 00:26:25.916 09:54:49 -- common/autotest_common.sh@900 -- # local 'cond=[[ "$(get_subsystem_names)" == "" ]]' 00:26:25.916 09:54:49 -- common/autotest_common.sh@901 -- # local max=10 00:26:25.916 09:54:49 -- common/autotest_common.sh@902 -- # (( max-- )) 00:26:25.916 09:54:49 -- common/autotest_common.sh@903 -- # eval '[[' '"$(get_subsystem_names)"' == '""' ']]' 00:26:25.916 09:54:49 -- common/autotest_common.sh@903 -- # get_subsystem_names 00:26:25.916 09:54:49 -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:26:25.916 09:54:49 -- host/discovery.sh@59 -- # jq -r '.[].name' 00:26:25.916 09:54:49 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:25.916 09:54:49 -- host/discovery.sh@59 -- # sort 00:26:25.916 09:54:49 -- common/autotest_common.sh@10 -- # set +x 00:26:25.916 09:54:49 -- host/discovery.sh@59 -- # xargs 00:26:25.916 09:54:49 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:25.916 09:54:49 -- common/autotest_common.sh@903 -- # [[ '' == '' ]] 00:26:25.916 09:54:49 -- common/autotest_common.sh@904 -- # return 0 00:26:25.916 09:54:49 -- host/discovery.sh@137 -- # waitforcondition '[[ "$(get_bdev_list)" == "" ]]' 00:26:25.916 09:54:49 -- common/autotest_common.sh@900 -- # local 'cond=[[ "$(get_bdev_list)" == "" ]]' 00:26:25.916 09:54:49 -- common/autotest_common.sh@901 -- # local max=10 00:26:25.916 09:54:49 -- common/autotest_common.sh@902 -- # (( max-- )) 00:26:25.916 09:54:49 -- common/autotest_common.sh@903 -- # eval '[[' '"$(get_bdev_list)"' == '""' ']]' 00:26:25.916 09:54:49 -- 
common/autotest_common.sh@903 -- # get_bdev_list 00:26:25.916 09:54:49 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:25.916 09:54:49 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:25.916 09:54:49 -- common/autotest_common.sh@10 -- # set +x 00:26:25.917 09:54:49 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:26:25.917 09:54:49 -- host/discovery.sh@55 -- # sort 00:26:25.917 09:54:49 -- host/discovery.sh@55 -- # xargs 00:26:25.917 09:54:49 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:25.917 09:54:49 -- common/autotest_common.sh@903 -- # [[ '' == '' ]] 00:26:25.917 09:54:49 -- common/autotest_common.sh@904 -- # return 0 00:26:25.917 09:54:49 -- host/discovery.sh@138 -- # is_notification_count_eq 2 00:26:25.917 09:54:49 -- host/discovery.sh@79 -- # expected_count=2 00:26:25.917 09:54:49 -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:26:25.917 09:54:49 -- common/autotest_common.sh@900 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:26:25.917 09:54:49 -- common/autotest_common.sh@901 -- # local max=10 00:26:25.917 09:54:49 -- common/autotest_common.sh@902 -- # (( max-- )) 00:26:25.917 09:54:49 -- common/autotest_common.sh@903 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:26:25.917 09:54:49 -- common/autotest_common.sh@903 -- # get_notification_count 00:26:25.917 09:54:49 -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:26:25.917 09:54:49 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:25.917 09:54:49 -- common/autotest_common.sh@10 -- # set +x 00:26:25.917 09:54:49 -- host/discovery.sh@74 -- # jq '. | length' 00:26:25.917 09:54:49 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:25.917 09:54:49 -- host/discovery.sh@74 -- # notification_count=2 00:26:25.917 09:54:49 -- host/discovery.sh@75 -- # notify_id=4 00:26:25.917 09:54:49 -- common/autotest_common.sh@903 -- # (( notification_count == expected_count )) 00:26:25.917 09:54:49 -- common/autotest_common.sh@904 -- # return 0 00:26:25.917 09:54:49 -- host/discovery.sh@141 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:26:25.917 09:54:49 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:25.917 09:54:49 -- common/autotest_common.sh@10 -- # set +x 00:26:27.293 [2024-04-18 09:54:50.246539] bdev_nvme.c:6906:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:26:27.293 [2024-04-18 09:54:50.246822] bdev_nvme.c:6986:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:26:27.293 [2024-04-18 09:54:50.246902] bdev_nvme.c:6869:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:26:27.293 [2024-04-18 09:54:50.252620] bdev_nvme.c:6835:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new subsystem nvme0 00:26:27.293 [2024-04-18 09:54:50.322581] bdev_nvme.c:6725:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:26:27.293 [2024-04-18 09:54:50.322682] bdev_nvme.c:6684:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:26:27.293 09:54:50 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:27.293 09:54:50 -- host/discovery.sh@143 -- # NOT rpc_cmd -s 
/tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:26:27.293 09:54:50 -- common/autotest_common.sh@638 -- # local es=0 00:26:27.293 09:54:50 -- common/autotest_common.sh@640 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:26:27.293 09:54:50 -- common/autotest_common.sh@626 -- # local arg=rpc_cmd 00:26:27.293 09:54:50 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:26:27.293 09:54:50 -- common/autotest_common.sh@630 -- # type -t rpc_cmd 00:26:27.293 09:54:50 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:26:27.293 09:54:50 -- common/autotest_common.sh@641 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:26:27.293 09:54:50 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:27.293 09:54:50 -- common/autotest_common.sh@10 -- # set +x 00:26:27.293 request: 00:26:27.293 { 00:26:27.293 "name": "nvme", 00:26:27.293 "trtype": "tcp", 00:26:27.293 "traddr": "10.0.0.2", 00:26:27.293 "hostnqn": "nqn.2021-12.io.spdk:test", 00:26:27.293 "adrfam": "ipv4", 00:26:27.293 "trsvcid": "8009", 00:26:27.293 "wait_for_attach": true, 00:26:27.293 "method": "bdev_nvme_start_discovery", 00:26:27.293 "req_id": 1 00:26:27.293 } 00:26:27.293 Got JSON-RPC error response 00:26:27.293 response: 00:26:27.293 { 00:26:27.293 "code": -17, 00:26:27.293 "message": "File exists" 00:26:27.293 } 00:26:27.293 09:54:50 -- common/autotest_common.sh@577 -- # [[ 1 == 0 ]] 00:26:27.293 09:54:50 -- common/autotest_common.sh@641 -- # es=1 00:26:27.293 09:54:50 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:26:27.293 09:54:50 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:26:27.293 09:54:50 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:26:27.293 09:54:50 -- host/discovery.sh@145 -- # get_discovery_ctrlrs 00:26:27.293 09:54:50 -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:26:27.293 09:54:50 -- host/discovery.sh@67 -- # xargs 00:26:27.293 09:54:50 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:27.293 09:54:50 -- host/discovery.sh@67 -- # sort 00:26:27.293 09:54:50 -- common/autotest_common.sh@10 -- # set +x 00:26:27.293 09:54:50 -- host/discovery.sh@67 -- # jq -r '.[].name' 00:26:27.293 09:54:50 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:27.293 09:54:50 -- host/discovery.sh@145 -- # [[ nvme == \n\v\m\e ]] 00:26:27.293 09:54:50 -- host/discovery.sh@146 -- # get_bdev_list 00:26:27.293 09:54:50 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:27.293 09:54:50 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:26:27.293 09:54:50 -- host/discovery.sh@55 -- # sort 00:26:27.293 09:54:50 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:27.293 09:54:50 -- host/discovery.sh@55 -- # xargs 00:26:27.293 09:54:50 -- common/autotest_common.sh@10 -- # set +x 00:26:27.293 09:54:50 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:27.293 09:54:50 -- host/discovery.sh@146 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:26:27.293 09:54:50 -- host/discovery.sh@149 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:26:27.293 09:54:50 -- common/autotest_common.sh@638 -- # local es=0 00:26:27.293 09:54:50 -- 
common/autotest_common.sh@640 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:26:27.293 09:54:50 -- common/autotest_common.sh@626 -- # local arg=rpc_cmd 00:26:27.293 09:54:50 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:26:27.293 09:54:50 -- common/autotest_common.sh@630 -- # type -t rpc_cmd 00:26:27.293 09:54:50 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:26:27.293 09:54:50 -- common/autotest_common.sh@641 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:26:27.293 09:54:50 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:27.293 09:54:50 -- common/autotest_common.sh@10 -- # set +x 00:26:27.293 request: 00:26:27.293 { 00:26:27.293 "name": "nvme_second", 00:26:27.293 "trtype": "tcp", 00:26:27.293 "traddr": "10.0.0.2", 00:26:27.293 "hostnqn": "nqn.2021-12.io.spdk:test", 00:26:27.293 "adrfam": "ipv4", 00:26:27.293 "trsvcid": "8009", 00:26:27.293 "wait_for_attach": true, 00:26:27.293 "method": "bdev_nvme_start_discovery", 00:26:27.293 "req_id": 1 00:26:27.293 } 00:26:27.293 Got JSON-RPC error response 00:26:27.293 response: 00:26:27.293 { 00:26:27.293 "code": -17, 00:26:27.293 "message": "File exists" 00:26:27.293 } 00:26:27.293 09:54:50 -- common/autotest_common.sh@577 -- # [[ 1 == 0 ]] 00:26:27.293 09:54:50 -- common/autotest_common.sh@641 -- # es=1 00:26:27.293 09:54:50 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:26:27.293 09:54:50 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:26:27.293 09:54:50 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:26:27.293 09:54:50 -- host/discovery.sh@151 -- # get_discovery_ctrlrs 00:26:27.293 09:54:50 -- host/discovery.sh@67 -- # jq -r '.[].name' 00:26:27.293 09:54:50 -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:26:27.293 09:54:50 -- host/discovery.sh@67 -- # sort 00:26:27.293 09:54:50 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:27.293 09:54:50 -- common/autotest_common.sh@10 -- # set +x 00:26:27.293 09:54:50 -- host/discovery.sh@67 -- # xargs 00:26:27.293 09:54:50 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:27.293 09:54:50 -- host/discovery.sh@151 -- # [[ nvme == \n\v\m\e ]] 00:26:27.293 09:54:50 -- host/discovery.sh@152 -- # get_bdev_list 00:26:27.293 09:54:50 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:27.293 09:54:50 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:26:27.293 09:54:50 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:27.293 09:54:50 -- common/autotest_common.sh@10 -- # set +x 00:26:27.293 09:54:50 -- host/discovery.sh@55 -- # xargs 00:26:27.293 09:54:50 -- host/discovery.sh@55 -- # sort 00:26:27.293 09:54:50 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:27.293 09:54:50 -- host/discovery.sh@152 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:26:27.293 09:54:50 -- host/discovery.sh@155 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:26:27.293 09:54:50 -- common/autotest_common.sh@638 -- # local es=0 00:26:27.293 09:54:50 -- common/autotest_common.sh@640 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:26:27.293 
09:54:50 -- common/autotest_common.sh@626 -- # local arg=rpc_cmd 00:26:27.293 09:54:50 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:26:27.293 09:54:50 -- common/autotest_common.sh@630 -- # type -t rpc_cmd 00:26:27.293 09:54:50 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:26:27.293 09:54:50 -- common/autotest_common.sh@641 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:26:27.293 09:54:50 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:27.294 09:54:50 -- common/autotest_common.sh@10 -- # set +x 00:26:28.228 [2024-04-18 09:54:51.587522] uring.c: 641:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.228 [2024-04-18 09:54:51.587901] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.228 [2024-04-18 09:54:51.587981] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.228 [2024-04-18 09:54:51.588010] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61400000de40 with addr=10.0.0.2, port=8010 00:26:28.228 [2024-04-18 09:54:51.588078] nvme_tcp.c:2699:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:26:28.228 [2024-04-18 09:54:51.588102] nvme.c: 821:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:26:28.228 [2024-04-18 09:54:51.588118] bdev_nvme.c:6968:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] could not start discovery connect 00:26:29.605 [2024-04-18 09:54:52.587562] uring.c: 641:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.605 [2024-04-18 09:54:52.587705] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.605 [2024-04-18 09:54:52.587765] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.605 [2024-04-18 09:54:52.587791] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61400000e040 with addr=10.0.0.2, port=8010 00:26:29.605 [2024-04-18 09:54:52.587857] nvme_tcp.c:2699:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:26:29.605 [2024-04-18 09:54:52.587873] nvme.c: 821:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:26:29.605 [2024-04-18 09:54:52.587888] bdev_nvme.c:6968:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] could not start discovery connect 00:26:30.554 [2024-04-18 09:54:53.587244] bdev_nvme.c:6949:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] timed out while attaching discovery ctrlr 00:26:30.554 request: 00:26:30.554 { 00:26:30.554 "name": "nvme_second", 00:26:30.554 "trtype": "tcp", 00:26:30.554 "traddr": "10.0.0.2", 00:26:30.554 "hostnqn": "nqn.2021-12.io.spdk:test", 00:26:30.554 "adrfam": "ipv4", 00:26:30.554 "trsvcid": "8010", 00:26:30.554 "attach_timeout_ms": 3000, 00:26:30.554 "method": "bdev_nvme_start_discovery", 00:26:30.554 "req_id": 1 00:26:30.554 } 00:26:30.554 Got JSON-RPC error response 00:26:30.554 response: 00:26:30.554 { 00:26:30.554 "code": -110, 00:26:30.554 "message": "Connection timed out" 00:26:30.554 } 00:26:30.554 09:54:53 -- common/autotest_common.sh@577 -- # [[ 1 == 0 ]] 00:26:30.554 09:54:53 -- common/autotest_common.sh@641 -- # es=1 00:26:30.554 09:54:53 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:26:30.554 09:54:53 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:26:30.554 09:54:53 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:26:30.554 09:54:53 -- host/discovery.sh@157 -- # 
get_discovery_ctrlrs 00:26:30.554 09:54:53 -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:26:30.554 09:54:53 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:30.554 09:54:53 -- host/discovery.sh@67 -- # jq -r '.[].name' 00:26:30.554 09:54:53 -- host/discovery.sh@67 -- # xargs 00:26:30.554 09:54:53 -- host/discovery.sh@67 -- # sort 00:26:30.554 09:54:53 -- common/autotest_common.sh@10 -- # set +x 00:26:30.554 09:54:53 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:30.554 09:54:53 -- host/discovery.sh@157 -- # [[ nvme == \n\v\m\e ]] 00:26:30.554 09:54:53 -- host/discovery.sh@159 -- # trap - SIGINT SIGTERM EXIT 00:26:30.554 09:54:53 -- host/discovery.sh@161 -- # kill 76516 00:26:30.554 09:54:53 -- host/discovery.sh@162 -- # nvmftestfini 00:26:30.554 09:54:53 -- nvmf/common.sh@477 -- # nvmfcleanup 00:26:30.554 09:54:53 -- nvmf/common.sh@117 -- # sync 00:26:30.554 09:54:53 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:26:30.554 09:54:53 -- nvmf/common.sh@120 -- # set +e 00:26:30.554 09:54:53 -- nvmf/common.sh@121 -- # for i in {1..20} 00:26:30.554 09:54:53 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:26:30.554 rmmod nvme_tcp 00:26:30.554 rmmod nvme_fabrics 00:26:30.554 rmmod nvme_keyring 00:26:30.554 09:54:53 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:26:30.554 09:54:53 -- nvmf/common.sh@124 -- # set -e 00:26:30.554 09:54:53 -- nvmf/common.sh@125 -- # return 0 00:26:30.554 09:54:53 -- nvmf/common.sh@478 -- # '[' -n 76484 ']' 00:26:30.554 09:54:53 -- nvmf/common.sh@479 -- # killprocess 76484 00:26:30.554 09:54:53 -- common/autotest_common.sh@936 -- # '[' -z 76484 ']' 00:26:30.554 09:54:53 -- common/autotest_common.sh@940 -- # kill -0 76484 00:26:30.554 09:54:53 -- common/autotest_common.sh@941 -- # uname 00:26:30.554 09:54:53 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:26:30.554 09:54:53 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 76484 00:26:30.554 09:54:53 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:26:30.554 killing process with pid 76484 00:26:30.554 09:54:53 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:26:30.554 09:54:53 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 76484' 00:26:30.554 09:54:53 -- common/autotest_common.sh@955 -- # kill 76484 00:26:30.554 09:54:53 -- common/autotest_common.sh@960 -- # wait 76484 00:26:31.992 09:54:54 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:26:31.992 09:54:54 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:26:31.992 09:54:54 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:26:31.992 09:54:54 -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:26:31.992 09:54:54 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:26:31.992 09:54:54 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:31.992 09:54:54 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:26:31.992 09:54:54 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:31.992 09:54:54 -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:26:31.992 ************************************ 00:26:31.992 END TEST nvmf_discovery 00:26:31.992 ************************************ 00:26:31.992 00:26:31.992 real 0m10.847s 00:26:31.992 user 0m20.799s 00:26:31.992 sys 0m2.026s 00:26:31.992 09:54:54 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:26:31.992 09:54:54 -- common/autotest_common.sh@10 -- # set +x 00:26:31.992 09:54:55 -- nvmf/nvmf.sh@100 
-- # run_test nvmf_discovery_remove_ifc /home/vagrant/spdk_repo/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp 00:26:31.992 09:54:55 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:26:31.992 09:54:55 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:26:31.992 09:54:55 -- common/autotest_common.sh@10 -- # set +x 00:26:31.992 ************************************ 00:26:31.992 START TEST nvmf_discovery_remove_ifc 00:26:31.992 ************************************ 00:26:31.992 09:54:55 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp 00:26:31.992 * Looking for test storage... 00:26:31.992 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:26:31.992 09:54:55 -- host/discovery_remove_ifc.sh@12 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:26:31.992 09:54:55 -- nvmf/common.sh@7 -- # uname -s 00:26:31.992 09:54:55 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:26:31.992 09:54:55 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:26:31.992 09:54:55 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:26:31.992 09:54:55 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:26:31.992 09:54:55 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:26:31.992 09:54:55 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:26:31.992 09:54:55 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:26:31.992 09:54:55 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:26:31.992 09:54:55 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:26:31.992 09:54:55 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:26:31.992 09:54:55 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:e0c1fd61-2902-4c83-8893-eeb512aeb67d 00:26:31.992 09:54:55 -- nvmf/common.sh@18 -- # NVME_HOSTID=e0c1fd61-2902-4c83-8893-eeb512aeb67d 00:26:31.992 09:54:55 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:26:31.992 09:54:55 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:26:31.992 09:54:55 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:26:31.992 09:54:55 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:26:31.992 09:54:55 -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:26:31.992 09:54:55 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:26:31.992 09:54:55 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:26:31.992 09:54:55 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:26:31.992 09:54:55 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:31.992 09:54:55 -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:31.992 09:54:55 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:31.992 09:54:55 -- paths/export.sh@5 -- # export PATH 00:26:31.992 09:54:55 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:31.992 09:54:55 -- nvmf/common.sh@47 -- # : 0 00:26:31.992 09:54:55 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:26:31.992 09:54:55 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:26:31.992 09:54:55 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:26:31.992 09:54:55 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:26:31.992 09:54:55 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:26:31.992 09:54:55 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:26:31.992 09:54:55 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:26:31.992 09:54:55 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:26:31.992 09:54:55 -- host/discovery_remove_ifc.sh@14 -- # '[' tcp == rdma ']' 00:26:31.992 09:54:55 -- host/discovery_remove_ifc.sh@19 -- # discovery_port=8009 00:26:31.992 09:54:55 -- host/discovery_remove_ifc.sh@20 -- # discovery_nqn=nqn.2014-08.org.nvmexpress.discovery 00:26:31.992 09:54:55 -- host/discovery_remove_ifc.sh@23 -- # nqn=nqn.2016-06.io.spdk:cnode 00:26:31.992 09:54:55 -- host/discovery_remove_ifc.sh@25 -- # host_nqn=nqn.2021-12.io.spdk:test 00:26:31.992 09:54:55 -- host/discovery_remove_ifc.sh@26 -- # host_sock=/tmp/host.sock 00:26:31.992 09:54:55 -- host/discovery_remove_ifc.sh@39 -- # nvmftestinit 00:26:31.992 09:54:55 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:26:31.992 09:54:55 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:26:31.992 09:54:55 -- nvmf/common.sh@437 -- # prepare_net_devs 00:26:31.992 09:54:55 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:26:31.992 09:54:55 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:26:31.992 09:54:55 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:31.992 09:54:55 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:26:31.992 09:54:55 
-- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:31.992 09:54:55 -- nvmf/common.sh@403 -- # [[ virt != virt ]] 00:26:31.992 09:54:55 -- nvmf/common.sh@405 -- # [[ no == yes ]] 00:26:31.992 09:54:55 -- nvmf/common.sh@412 -- # [[ virt == phy ]] 00:26:31.992 09:54:55 -- nvmf/common.sh@415 -- # [[ virt == phy-fallback ]] 00:26:31.992 09:54:55 -- nvmf/common.sh@420 -- # [[ tcp == tcp ]] 00:26:31.992 09:54:55 -- nvmf/common.sh@421 -- # nvmf_veth_init 00:26:31.992 09:54:55 -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:26:31.992 09:54:55 -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:26:31.992 09:54:55 -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:26:31.992 09:54:55 -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:26:31.992 09:54:55 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:26:31.992 09:54:55 -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:26:31.992 09:54:55 -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:26:31.992 09:54:55 -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:26:31.992 09:54:55 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:26:31.992 09:54:55 -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:26:31.992 09:54:55 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:26:31.992 09:54:55 -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:26:31.992 09:54:55 -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:26:31.992 09:54:55 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:26:31.992 Cannot find device "nvmf_tgt_br" 00:26:31.993 09:54:55 -- nvmf/common.sh@155 -- # true 00:26:31.993 09:54:55 -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:26:31.993 Cannot find device "nvmf_tgt_br2" 00:26:31.993 09:54:55 -- nvmf/common.sh@156 -- # true 00:26:31.993 09:54:55 -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:26:31.993 09:54:55 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:26:31.993 Cannot find device "nvmf_tgt_br" 00:26:31.993 09:54:55 -- nvmf/common.sh@158 -- # true 00:26:31.993 09:54:55 -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:26:31.993 Cannot find device "nvmf_tgt_br2" 00:26:31.993 09:54:55 -- nvmf/common.sh@159 -- # true 00:26:31.993 09:54:55 -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:26:31.993 09:54:55 -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:26:31.993 09:54:55 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:26:31.993 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:26:31.993 09:54:55 -- nvmf/common.sh@162 -- # true 00:26:31.993 09:54:55 -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:26:31.993 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:26:31.993 09:54:55 -- nvmf/common.sh@163 -- # true 00:26:31.993 09:54:55 -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:26:31.993 09:54:55 -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:26:31.993 09:54:55 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:26:31.993 09:54:55 -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:26:31.993 09:54:55 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:26:32.254 09:54:55 -- nvmf/common.sh@175 -- # ip 
link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:26:32.254 09:54:55 -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:26:32.254 09:54:55 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:26:32.254 09:54:55 -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:26:32.254 09:54:55 -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:26:32.254 09:54:55 -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:26:32.254 09:54:55 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:26:32.254 09:54:55 -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:26:32.254 09:54:55 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:26:32.254 09:54:55 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:26:32.254 09:54:55 -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:26:32.254 09:54:55 -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:26:32.254 09:54:55 -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:26:32.254 09:54:55 -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:26:32.254 09:54:55 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:26:32.254 09:54:55 -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:26:32.254 09:54:55 -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:26:32.254 09:54:55 -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:26:32.254 09:54:55 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:26:32.254 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:26:32.254 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.146 ms 00:26:32.254 00:26:32.254 --- 10.0.0.2 ping statistics --- 00:26:32.254 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:32.254 rtt min/avg/max/mdev = 0.146/0.146/0.146/0.000 ms 00:26:32.254 09:54:55 -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:26:32.254 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:26:32.254 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.079 ms 00:26:32.254 00:26:32.254 --- 10.0.0.3 ping statistics --- 00:26:32.254 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:32.254 rtt min/avg/max/mdev = 0.079/0.079/0.079/0.000 ms 00:26:32.254 09:54:55 -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:26:32.254 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:26:32.254 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.049 ms 00:26:32.254 00:26:32.254 --- 10.0.0.1 ping statistics --- 00:26:32.254 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:32.254 rtt min/avg/max/mdev = 0.049/0.049/0.049/0.000 ms 00:26:32.254 09:54:55 -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:26:32.254 09:54:55 -- nvmf/common.sh@422 -- # return 0 00:26:32.254 09:54:55 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:26:32.254 09:54:55 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:26:32.254 09:54:55 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:26:32.254 09:54:55 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:26:32.254 09:54:55 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:26:32.254 09:54:55 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:26:32.254 09:54:55 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:26:32.254 09:54:55 -- host/discovery_remove_ifc.sh@40 -- # nvmfappstart -m 0x2 00:26:32.254 09:54:55 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:26:32.254 09:54:55 -- common/autotest_common.sh@710 -- # xtrace_disable 00:26:32.254 09:54:55 -- common/autotest_common.sh@10 -- # set +x 00:26:32.254 09:54:55 -- nvmf/common.sh@470 -- # nvmfpid=76986 00:26:32.254 09:54:55 -- nvmf/common.sh@469 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:26:32.254 09:54:55 -- nvmf/common.sh@471 -- # waitforlisten 76986 00:26:32.254 09:54:55 -- common/autotest_common.sh@817 -- # '[' -z 76986 ']' 00:26:32.254 09:54:55 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:32.254 09:54:55 -- common/autotest_common.sh@822 -- # local max_retries=100 00:26:32.254 09:54:55 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:32.254 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:32.254 09:54:55 -- common/autotest_common.sh@826 -- # xtrace_disable 00:26:32.254 09:54:55 -- common/autotest_common.sh@10 -- # set +x 00:26:32.513 [2024-04-18 09:54:55.696074] Starting SPDK v24.05-pre git sha1 65b4e17c6 / DPDK 23.11.0 initialization... 00:26:32.513 [2024-04-18 09:54:55.696331] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:26:32.513 [2024-04-18 09:54:55.876376] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:32.770 [2024-04-18 09:54:56.150988] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:26:32.770 [2024-04-18 09:54:56.151074] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:26:32.770 [2024-04-18 09:54:56.151096] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:26:32.770 [2024-04-18 09:54:56.151142] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:26:32.770 [2024-04-18 09:54:56.151163] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
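A note on the network plumbing traced piecemeal above: nvmf_veth_init (nvmf/common.sh) builds one initiator-side veth pair left in the root namespace, two target-side veth pairs whose inner ends are moved into a private nvmf_tgt_ns_spdk namespace, and a bridge that ties the outer ends together, then opens port 4420 in iptables. Collected from the xtrace into one sequence (a consolidation of the commands already shown above, not the verbatim script), the topology setup is roughly:

    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br    # initiator side, stays in the root ns
    ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br     # primary target interface
    ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2    # secondary target interface
    ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
    ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
    ip addr add 10.0.0.1/24 dev nvmf_init_if                     # initiator address
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2
    ip link set nvmf_init_if up && ip link set nvmf_init_br up
    ip link set nvmf_tgt_br up && ip link set nvmf_tgt_br2 up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
    ip netns exec nvmf_tgt_ns_spdk ip link set lo up
    ip link add nvmf_br type bridge && ip link set nvmf_br up
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br  master nvmf_br
    ip link set nvmf_tgt_br2 master nvmf_br
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
    iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
    ping -c 1 10.0.0.2 && ping -c 1 10.0.0.3                     # sanity-check both target addresses

With that in place the 10.0.0.1 initiator can reach both target addresses, which is what the three ping checks above confirm before the target application is started.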
00:26:32.770 [2024-04-18 09:54:56.151219] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:26:33.337 09:54:56 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:26:33.337 09:54:56 -- common/autotest_common.sh@850 -- # return 0 00:26:33.337 09:54:56 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:26:33.337 09:54:56 -- common/autotest_common.sh@716 -- # xtrace_disable 00:26:33.337 09:54:56 -- common/autotest_common.sh@10 -- # set +x 00:26:33.595 09:54:56 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:26:33.595 09:54:56 -- host/discovery_remove_ifc.sh@43 -- # rpc_cmd 00:26:33.595 09:54:56 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:33.595 09:54:56 -- common/autotest_common.sh@10 -- # set +x 00:26:33.595 [2024-04-18 09:54:56.779613] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:26:33.595 [2024-04-18 09:54:56.787786] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:26:33.595 null0 00:26:33.595 [2024-04-18 09:54:56.819839] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:26:33.595 09:54:56 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:33.595 09:54:56 -- host/discovery_remove_ifc.sh@59 -- # hostpid=77018 00:26:33.595 09:54:56 -- host/discovery_remove_ifc.sh@60 -- # waitforlisten 77018 /tmp/host.sock 00:26:33.595 09:54:56 -- common/autotest_common.sh@817 -- # '[' -z 77018 ']' 00:26:33.595 09:54:56 -- host/discovery_remove_ifc.sh@58 -- # /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock --wait-for-rpc -L bdev_nvme 00:26:33.595 09:54:56 -- common/autotest_common.sh@821 -- # local rpc_addr=/tmp/host.sock 00:26:33.595 09:54:56 -- common/autotest_common.sh@822 -- # local max_retries=100 00:26:33.595 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:26:33.595 09:54:56 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:26:33.595 09:54:56 -- common/autotest_common.sh@826 -- # xtrace_disable 00:26:33.595 09:54:56 -- common/autotest_common.sh@10 -- # set +x 00:26:33.595 [2024-04-18 09:54:56.958305] Starting SPDK v24.05-pre git sha1 65b4e17c6 / DPDK 23.11.0 initialization... 
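The rpc_cmd batch at host/discovery_remove_ifc.sh:43 appears in this trace only through its side effects: the TCP transport init, a listener on 10.0.0.2:8009 for the discovery service, a null0 bdev, and a data listener on 10.0.0.2:4420. The individual RPC calls are not visible in the excerpt. As a hedged sketch using standard rpc.py method names (the null bdev size and block size below are placeholders, and the exact flags the script passes may differ), the equivalent target-side setup would be something like:

    # Issued against the in-namespace nvmf_tgt (default RPC socket /var/tmp/spdk.sock).
    # "-t tcp -o" matches NVMF_TRANSPORT_OPTS from the trace; serial comes from NVMF_SERIAL.
    scripts/rpc.py nvmf_create_transport -t tcp -o
    scripts/rpc.py bdev_null_create null0 1000 512               # placeholder: 1000 MiB, 512 B blocks
    scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDKISFASTANDAWESOME
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4
    # Discovery service listener on port 8009 ($discovery_nqn in the script header)
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery -t tcp -a 10.0.0.2 -s 8009 -f ipv4

The second nvmf_tgt instance started right after (-m 0x1 -r /tmp/host.sock --wait-for-rpc -L bdev_nvme) plays the host role: it serves no storage itself and only runs the bdev_nvme discovery client that the rest of the test drives over /tmp/host.sock.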
00:26:33.595 [2024-04-18 09:54:56.958484] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77018 ] 00:26:33.853 [2024-04-18 09:54:57.135823] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:34.111 [2024-04-18 09:54:57.414687] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:26:34.676 09:54:57 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:26:34.676 09:54:57 -- common/autotest_common.sh@850 -- # return 0 00:26:34.676 09:54:57 -- host/discovery_remove_ifc.sh@62 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:26:34.676 09:54:57 -- host/discovery_remove_ifc.sh@65 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_set_options -e 1 00:26:34.676 09:54:57 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:34.676 09:54:57 -- common/autotest_common.sh@10 -- # set +x 00:26:34.676 09:54:57 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:34.676 09:54:57 -- host/discovery_remove_ifc.sh@66 -- # rpc_cmd -s /tmp/host.sock framework_start_init 00:26:34.676 09:54:57 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:34.676 09:54:57 -- common/autotest_common.sh@10 -- # set +x 00:26:34.934 09:54:58 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:34.934 09:54:58 -- host/discovery_remove_ifc.sh@69 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test --ctrlr-loss-timeout-sec 2 --reconnect-delay-sec 1 --fast-io-fail-timeout-sec 1 --wait-for-attach 00:26:34.934 09:54:58 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:34.934 09:54:58 -- common/autotest_common.sh@10 -- # set +x 00:26:35.867 [2024-04-18 09:54:59.256121] bdev_nvme.c:6906:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:26:35.867 [2024-04-18 09:54:59.256202] bdev_nvme.c:6986:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:26:35.867 [2024-04-18 09:54:59.256242] bdev_nvme.c:6869:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:26:36.125 [2024-04-18 09:54:59.262244] bdev_nvme.c:6835:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme0 00:26:36.125 [2024-04-18 09:54:59.326850] bdev_nvme.c:7696:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:26:36.125 [2024-04-18 09:54:59.326984] bdev_nvme.c:7696:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:26:36.125 [2024-04-18 09:54:59.327060] bdev_nvme.c:7696:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:26:36.125 [2024-04-18 09:54:59.327104] bdev_nvme.c:6725:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:26:36.125 [2024-04-18 09:54:59.327172] bdev_nvme.c:6684:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:26:36.125 09:54:59 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:36.125 09:54:59 -- host/discovery_remove_ifc.sh@72 -- # wait_for_bdev nvme0n1 00:26:36.125 09:54:59 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:26:36.125 09:54:59 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:26:36.125 09:54:59 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock 
bdev_get_bdevs 00:26:36.125 09:54:59 -- host/discovery_remove_ifc.sh@29 -- # sort 00:26:36.125 09:54:59 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:36.125 09:54:59 -- common/autotest_common.sh@10 -- # set +x 00:26:36.125 09:54:59 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:26:36.125 [2024-04-18 09:54:59.334797] bdev_nvme.c:1605:bdev_nvme_disconnected_qpair_cb: *DEBUG*: qpair 0x614000006a40 was disconnected and freed. delete nvme_qpair. 00:26:36.125 09:54:59 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:36.125 09:54:59 -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != \n\v\m\e\0\n\1 ]] 00:26:36.125 09:54:59 -- host/discovery_remove_ifc.sh@75 -- # ip netns exec nvmf_tgt_ns_spdk ip addr del 10.0.0.2/24 dev nvmf_tgt_if 00:26:36.125 09:54:59 -- host/discovery_remove_ifc.sh@76 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if down 00:26:36.125 09:54:59 -- host/discovery_remove_ifc.sh@79 -- # wait_for_bdev '' 00:26:36.125 09:54:59 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:26:36.125 09:54:59 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:36.125 09:54:59 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:26:36.125 09:54:59 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:36.125 09:54:59 -- common/autotest_common.sh@10 -- # set +x 00:26:36.125 09:54:59 -- host/discovery_remove_ifc.sh@29 -- # sort 00:26:36.125 09:54:59 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:26:36.125 09:54:59 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:36.125 09:54:59 -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:26:36.125 09:54:59 -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:26:37.497 09:55:00 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:26:37.497 09:55:00 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:37.497 09:55:00 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:37.497 09:55:00 -- common/autotest_common.sh@10 -- # set +x 00:26:37.497 09:55:00 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:26:37.497 09:55:00 -- host/discovery_remove_ifc.sh@29 -- # sort 00:26:37.497 09:55:00 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:26:37.497 09:55:00 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:37.497 09:55:00 -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:26:37.497 09:55:00 -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:26:38.429 09:55:01 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:26:38.429 09:55:01 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:38.429 09:55:01 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:26:38.429 09:55:01 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:38.429 09:55:01 -- common/autotest_common.sh@10 -- # set +x 00:26:38.429 09:55:01 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:26:38.429 09:55:01 -- host/discovery_remove_ifc.sh@29 -- # sort 00:26:38.429 09:55:01 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:38.429 09:55:01 -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:26:38.429 09:55:01 -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:26:39.361 09:55:02 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:26:39.361 09:55:02 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:39.362 09:55:02 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:26:39.362 09:55:02 -- host/discovery_remove_ifc.sh@29 
-- # sort 00:26:39.362 09:55:02 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:26:39.362 09:55:02 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:39.362 09:55:02 -- common/autotest_common.sh@10 -- # set +x 00:26:39.362 09:55:02 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:39.362 09:55:02 -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:26:39.362 09:55:02 -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:26:40.298 09:55:03 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:26:40.298 09:55:03 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:40.298 09:55:03 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:40.298 09:55:03 -- common/autotest_common.sh@10 -- # set +x 00:26:40.298 09:55:03 -- host/discovery_remove_ifc.sh@29 -- # sort 00:26:40.298 09:55:03 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:26:40.298 09:55:03 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:26:40.556 09:55:03 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:40.556 09:55:03 -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:26:40.556 09:55:03 -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:26:41.509 09:55:04 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:26:41.509 09:55:04 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:41.509 09:55:04 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:41.509 09:55:04 -- common/autotest_common.sh@10 -- # set +x 00:26:41.509 09:55:04 -- host/discovery_remove_ifc.sh@29 -- # sort 00:26:41.509 09:55:04 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:26:41.509 09:55:04 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:26:41.509 [2024-04-18 09:55:04.753968] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 110: Connection timed out 00:26:41.509 [2024-04-18 09:55:04.754338] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:26:41.509 [2024-04-18 09:55:04.754534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:41.509 [2024-04-18 09:55:04.754910] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:26:41.509 [2024-04-18 09:55:04.755092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:41.509 [2024-04-18 09:55:04.755334] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:26:41.509 [2024-04-18 09:55:04.755362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:41.509 [2024-04-18 09:55:04.755382] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:26:41.510 [2024-04-18 09:55:04.755400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:41.510 [2024-04-18 09:55:04.755419] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:26:41.510 [2024-04-18 09:55:04.755447] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:41.510 [2024-04-18 09:55:04.755464] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x614000005840 is same with the state(5) to be set 00:26:41.510 09:55:04 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:41.510 [2024-04-18 09:55:04.763953] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x614000005840 (9): Bad file descriptor 00:26:41.510 [2024-04-18 09:55:04.773997] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:26:41.510 09:55:04 -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:26:41.510 09:55:04 -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:26:42.444 09:55:05 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:26:42.444 [2024-04-18 09:55:05.799201] uring.c: 641:uring_sock_create: *ERROR*: connect() failed, errno = 110 00:26:42.444 09:55:05 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:42.444 09:55:05 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:42.444 09:55:05 -- common/autotest_common.sh@10 -- # set +x 00:26:42.444 09:55:05 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:26:42.444 09:55:05 -- host/discovery_remove_ifc.sh@29 -- # sort 00:26:42.444 09:55:05 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:26:43.817 [2024-04-18 09:55:06.820201] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 110 00:26:44.807 [2024-04-18 09:55:07.844231] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 110 00:26:44.807 [2024-04-18 09:55:07.844402] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000005840 with addr=10.0.0.2, port=4420 00:26:44.807 [2024-04-18 09:55:07.844449] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x614000005840 is same with the state(5) to be set 00:26:44.807 [2024-04-18 09:55:07.845466] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x614000005840 (9): Bad file descriptor 00:26:44.807 [2024-04-18 09:55:07.845554] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:44.807 [2024-04-18 09:55:07.845653] bdev_nvme.c:6657:remove_discovery_entry: *INFO*: Discovery[10.0.0.2:8009] Remove discovery entry: nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 00:26:44.807 [2024-04-18 09:55:07.845755] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:26:44.807 [2024-04-18 09:55:07.845792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:44.807 [2024-04-18 09:55:07.845829] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:26:44.807 [2024-04-18 09:55:07.845854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:44.807 [2024-04-18 09:55:07.845880] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:26:44.807 [2024-04-18 09:55:07.845905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:44.807 [2024-04-18 09:55:07.845929] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:26:44.807 [2024-04-18 09:55:07.845952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:44.807 [2024-04-18 09:55:07.845977] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:26:44.807 [2024-04-18 09:55:07.846002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:44.807 [2024-04-18 09:55:07.846027] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery] in failed state. 
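The errno 110 connect failures and the controller reset/failure messages in this stretch of the log are the bdev_nvme reconnect machinery reacting to the target address being removed a moment earlier (ip addr del 10.0.0.2/24 and link down inside the namespace), bounded by the timeouts passed when discovery was started at discovery_remove_ifc.sh:69. Restated outside the rpc_cmd wrapper (a sketch; the arguments are the same ones visible in the trace, and the sleep stands in for the test's wait_for_bdev polling loop), the host-side sequence being exercised is roughly:

    # Attach via the discovery service and keep retrying within a short, bounded window:
    #   --reconnect-delay-sec 1        retry the TCP connection once per second
    #   --ctrlr-loss-timeout-sec 2     give up on a lost controller after ~2 s
    #   --fast-io-fail-timeout-sec 1   fail outstanding I/O quickly while disconnected
    scripts/rpc.py -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 \
            -f ipv4 -q nqn.2021-12.io.spdk:test --ctrlr-loss-timeout-sec 2 --reconnect-delay-sec 1 \
            --fast-io-fail-timeout-sec 1 --wait-for-attach

    # Fault injection: drop the target address, wait for nvme0n1 to disappear, then restore it
    ip netns exec nvmf_tgt_ns_spdk ip addr del 10.0.0.2/24 dev nvmf_tgt_if
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if down
    sleep 3        # crude stand-in for the test's wait_for_bdev '' loop; exceeds ctrlr-loss-timeout-sec
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up

Once the address is back, the discovery poller reattaches and the new controller comes up as nvme1, which is what the wait_for_bdev nvme1n1 loop further down is polling for.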
00:26:44.807 [2024-04-18 09:55:07.846068] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x614000005440 (9): Bad file descriptor 00:26:44.807 [2024-04-18 09:55:07.846592] nvme_fabric.c: 214:nvme_fabric_prop_get_cmd_async: *ERROR*: Failed to send Property Get fabrics command 00:26:44.807 [2024-04-18 09:55:07.846647] nvme_ctrlr.c:1148:nvme_ctrlr_shutdown_async: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery] Failed to read the CC register 00:26:44.807 09:55:07 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:44.807 09:55:07 -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:26:44.807 09:55:07 -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:26:45.739 09:55:08 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:26:45.740 09:55:08 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:45.740 09:55:08 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:26:45.740 09:55:08 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:45.740 09:55:08 -- common/autotest_common.sh@10 -- # set +x 00:26:45.740 09:55:08 -- host/discovery_remove_ifc.sh@29 -- # sort 00:26:45.740 09:55:08 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:26:45.740 09:55:08 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:45.740 09:55:08 -- host/discovery_remove_ifc.sh@33 -- # [[ '' != '' ]] 00:26:45.740 09:55:08 -- host/discovery_remove_ifc.sh@82 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:26:45.740 09:55:08 -- host/discovery_remove_ifc.sh@83 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:26:45.740 09:55:08 -- host/discovery_remove_ifc.sh@86 -- # wait_for_bdev nvme1n1 00:26:45.740 09:55:08 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:26:45.740 09:55:08 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:45.740 09:55:08 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:45.740 09:55:08 -- common/autotest_common.sh@10 -- # set +x 00:26:45.740 09:55:08 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:26:45.740 09:55:08 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:26:45.740 09:55:08 -- host/discovery_remove_ifc.sh@29 -- # sort 00:26:45.740 09:55:08 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:45.740 09:55:09 -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:26:45.740 09:55:09 -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:26:46.672 [2024-04-18 09:55:09.853683] bdev_nvme.c:6906:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:26:46.672 [2024-04-18 09:55:09.853739] bdev_nvme.c:6986:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:26:46.672 [2024-04-18 09:55:09.853785] bdev_nvme.c:6869:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:26:46.672 [2024-04-18 09:55:09.859800] bdev_nvme.c:6835:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme1 00:26:46.672 [2024-04-18 09:55:09.925791] bdev_nvme.c:7696:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:26:46.672 [2024-04-18 09:55:09.925897] bdev_nvme.c:7696:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:26:46.672 [2024-04-18 09:55:09.925962] bdev_nvme.c:7696:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:26:46.672 [2024-04-18 09:55:09.925993] bdev_nvme.c:6725:discovery_attach_controller_done: *INFO*: 
Discovery[10.0.0.2:8009] attach nvme1 done 00:26:46.672 [2024-04-18 09:55:09.926011] bdev_nvme.c:6684:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:26:46.672 [2024-04-18 09:55:09.933431] bdev_nvme.c:1605:bdev_nvme_disconnected_qpair_cb: *DEBUG*: qpair 0x61400000a240 was disconnected and freed. delete nvme_qpair. 00:26:46.672 09:55:10 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:26:46.672 09:55:10 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:46.672 09:55:10 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:26:46.672 09:55:10 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:46.672 09:55:10 -- common/autotest_common.sh@10 -- # set +x 00:26:46.672 09:55:10 -- host/discovery_remove_ifc.sh@29 -- # sort 00:26:46.672 09:55:10 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:26:46.672 09:55:10 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:46.930 09:55:10 -- host/discovery_remove_ifc.sh@33 -- # [[ nvme1n1 != \n\v\m\e\1\n\1 ]] 00:26:46.930 09:55:10 -- host/discovery_remove_ifc.sh@88 -- # trap - SIGINT SIGTERM EXIT 00:26:46.930 09:55:10 -- host/discovery_remove_ifc.sh@90 -- # killprocess 77018 00:26:46.930 09:55:10 -- common/autotest_common.sh@936 -- # '[' -z 77018 ']' 00:26:46.930 09:55:10 -- common/autotest_common.sh@940 -- # kill -0 77018 00:26:46.931 09:55:10 -- common/autotest_common.sh@941 -- # uname 00:26:46.931 09:55:10 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:26:46.931 09:55:10 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 77018 00:26:46.931 killing process with pid 77018 00:26:46.931 09:55:10 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:26:46.931 09:55:10 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:26:46.931 09:55:10 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 77018' 00:26:46.931 09:55:10 -- common/autotest_common.sh@955 -- # kill 77018 00:26:46.931 09:55:10 -- common/autotest_common.sh@960 -- # wait 77018 00:26:48.314 09:55:11 -- host/discovery_remove_ifc.sh@91 -- # nvmftestfini 00:26:48.314 09:55:11 -- nvmf/common.sh@477 -- # nvmfcleanup 00:26:48.314 09:55:11 -- nvmf/common.sh@117 -- # sync 00:26:48.314 09:55:11 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:26:48.314 09:55:11 -- nvmf/common.sh@120 -- # set +e 00:26:48.314 09:55:11 -- nvmf/common.sh@121 -- # for i in {1..20} 00:26:48.314 09:55:11 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:26:48.314 rmmod nvme_tcp 00:26:48.314 rmmod nvme_fabrics 00:26:48.315 rmmod nvme_keyring 00:26:48.315 09:55:11 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:26:48.315 09:55:11 -- nvmf/common.sh@124 -- # set -e 00:26:48.315 09:55:11 -- nvmf/common.sh@125 -- # return 0 00:26:48.315 09:55:11 -- nvmf/common.sh@478 -- # '[' -n 76986 ']' 00:26:48.315 09:55:11 -- nvmf/common.sh@479 -- # killprocess 76986 00:26:48.315 09:55:11 -- common/autotest_common.sh@936 -- # '[' -z 76986 ']' 00:26:48.315 09:55:11 -- common/autotest_common.sh@940 -- # kill -0 76986 00:26:48.315 09:55:11 -- common/autotest_common.sh@941 -- # uname 00:26:48.315 09:55:11 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:26:48.315 09:55:11 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 76986 00:26:48.315 killing process with pid 76986 00:26:48.315 09:55:11 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:26:48.315 09:55:11 -- common/autotest_common.sh@946 -- # '[' 
reactor_1 = sudo ']' 00:26:48.315 09:55:11 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 76986' 00:26:48.315 09:55:11 -- common/autotest_common.sh@955 -- # kill 76986 00:26:48.315 09:55:11 -- common/autotest_common.sh@960 -- # wait 76986 00:26:49.687 09:55:12 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:26:49.687 09:55:12 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:26:49.687 09:55:12 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:26:49.687 09:55:12 -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:26:49.687 09:55:12 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:26:49.687 09:55:12 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:49.687 09:55:12 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:26:49.687 09:55:12 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:49.687 09:55:12 -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:26:49.687 ************************************ 00:26:49.687 END TEST nvmf_discovery_remove_ifc 00:26:49.687 ************************************ 00:26:49.687 00:26:49.687 real 0m17.573s 00:26:49.687 user 0m27.917s 00:26:49.687 sys 0m2.790s 00:26:49.687 09:55:12 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:26:49.687 09:55:12 -- common/autotest_common.sh@10 -- # set +x 00:26:49.687 09:55:12 -- nvmf/nvmf.sh@101 -- # run_test nvmf_identify_kernel_target /home/vagrant/spdk_repo/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp 00:26:49.687 09:55:12 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:26:49.687 09:55:12 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:26:49.687 09:55:12 -- common/autotest_common.sh@10 -- # set +x 00:26:49.687 ************************************ 00:26:49.687 START TEST nvmf_identify_kernel_target 00:26:49.687 ************************************ 00:26:49.687 09:55:12 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp 00:26:49.687 * Looking for test storage... 
00:26:49.687 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:26:49.687 09:55:12 -- host/identify_kernel_nvmf.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:26:49.687 09:55:12 -- nvmf/common.sh@7 -- # uname -s 00:26:49.687 09:55:12 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:26:49.687 09:55:12 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:26:49.687 09:55:12 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:26:49.687 09:55:12 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:26:49.687 09:55:12 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:26:49.687 09:55:12 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:26:49.687 09:55:12 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:26:49.687 09:55:12 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:26:49.687 09:55:12 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:26:49.687 09:55:12 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:26:49.687 09:55:12 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:e0c1fd61-2902-4c83-8893-eeb512aeb67d 00:26:49.687 09:55:12 -- nvmf/common.sh@18 -- # NVME_HOSTID=e0c1fd61-2902-4c83-8893-eeb512aeb67d 00:26:49.687 09:55:12 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:26:49.687 09:55:12 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:26:49.687 09:55:12 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:26:49.687 09:55:12 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:26:49.687 09:55:12 -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:26:49.687 09:55:12 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:26:49.687 09:55:12 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:26:49.687 09:55:12 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:26:49.687 09:55:12 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:49.687 09:55:12 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:49.687 09:55:12 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:49.687 09:55:12 -- paths/export.sh@5 -- # export PATH 00:26:49.687 09:55:12 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:49.687 09:55:12 -- nvmf/common.sh@47 -- # : 0 00:26:49.687 09:55:12 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:26:49.687 09:55:12 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:26:49.687 09:55:12 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:26:49.687 09:55:12 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:26:49.687 09:55:12 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:26:49.687 09:55:12 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:26:49.687 09:55:12 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:26:49.687 09:55:12 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:26:49.687 09:55:12 -- host/identify_kernel_nvmf.sh@11 -- # nvmftestinit 00:26:49.687 09:55:12 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:26:49.687 09:55:12 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:26:49.687 09:55:12 -- nvmf/common.sh@437 -- # prepare_net_devs 00:26:49.687 09:55:12 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:26:49.687 09:55:12 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:26:49.687 09:55:12 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:49.687 09:55:12 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:26:49.687 09:55:12 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:49.687 09:55:12 -- nvmf/common.sh@403 -- # [[ virt != virt ]] 00:26:49.687 09:55:12 -- nvmf/common.sh@405 -- # [[ no == yes ]] 00:26:49.687 09:55:12 -- nvmf/common.sh@412 -- # [[ virt == phy ]] 00:26:49.687 09:55:12 -- nvmf/common.sh@415 -- # [[ virt == phy-fallback ]] 00:26:49.687 09:55:12 -- nvmf/common.sh@420 -- # [[ tcp == tcp ]] 00:26:49.687 09:55:12 -- nvmf/common.sh@421 -- # nvmf_veth_init 00:26:49.687 09:55:12 -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:26:49.687 09:55:12 -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:26:49.687 09:55:12 -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:26:49.687 09:55:12 -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:26:49.688 09:55:12 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:26:49.688 09:55:12 -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:26:49.688 09:55:12 -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:26:49.688 09:55:12 -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec 
"$NVMF_TARGET_NAMESPACE") 00:26:49.688 09:55:12 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:26:49.688 09:55:12 -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:26:49.688 09:55:12 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:26:49.688 09:55:12 -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:26:49.688 09:55:12 -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:26:49.688 09:55:12 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:26:49.688 Cannot find device "nvmf_tgt_br" 00:26:49.688 09:55:12 -- nvmf/common.sh@155 -- # true 00:26:49.688 09:55:12 -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:26:49.688 Cannot find device "nvmf_tgt_br2" 00:26:49.688 09:55:12 -- nvmf/common.sh@156 -- # true 00:26:49.688 09:55:12 -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:26:49.688 09:55:12 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:26:49.688 Cannot find device "nvmf_tgt_br" 00:26:49.688 09:55:12 -- nvmf/common.sh@158 -- # true 00:26:49.688 09:55:12 -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:26:49.688 Cannot find device "nvmf_tgt_br2" 00:26:49.688 09:55:12 -- nvmf/common.sh@159 -- # true 00:26:49.688 09:55:12 -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:26:49.688 09:55:12 -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:26:49.688 09:55:12 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:26:49.688 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:26:49.688 09:55:12 -- nvmf/common.sh@162 -- # true 00:26:49.688 09:55:12 -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:26:49.688 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:26:49.688 09:55:13 -- nvmf/common.sh@163 -- # true 00:26:49.688 09:55:13 -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:26:49.688 09:55:13 -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:26:49.688 09:55:13 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:26:49.688 09:55:13 -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:26:49.688 09:55:13 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:26:49.688 09:55:13 -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:26:49.688 09:55:13 -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:26:49.688 09:55:13 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:26:49.947 09:55:13 -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:26:49.947 09:55:13 -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:26:49.947 09:55:13 -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:26:49.947 09:55:13 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:26:49.947 09:55:13 -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:26:49.947 09:55:13 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:26:49.947 09:55:13 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:26:49.947 09:55:13 -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:26:49.947 09:55:13 -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:26:49.947 09:55:13 -- 
nvmf/common.sh@193 -- # ip link set nvmf_br up 00:26:49.947 09:55:13 -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:26:49.947 09:55:13 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:26:49.947 09:55:13 -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:26:49.947 09:55:13 -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:26:49.947 09:55:13 -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:26:49.947 09:55:13 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:26:49.947 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:26:49.947 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.095 ms 00:26:49.947 00:26:49.947 --- 10.0.0.2 ping statistics --- 00:26:49.947 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:49.947 rtt min/avg/max/mdev = 0.095/0.095/0.095/0.000 ms 00:26:49.947 09:55:13 -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:26:49.947 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:26:49.947 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.066 ms 00:26:49.947 00:26:49.947 --- 10.0.0.3 ping statistics --- 00:26:49.947 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:49.947 rtt min/avg/max/mdev = 0.066/0.066/0.066/0.000 ms 00:26:49.947 09:55:13 -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:26:49.947 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:26:49.947 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.042 ms 00:26:49.947 00:26:49.947 --- 10.0.0.1 ping statistics --- 00:26:49.947 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:49.947 rtt min/avg/max/mdev = 0.042/0.042/0.042/0.000 ms 00:26:49.947 09:55:13 -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:26:49.947 09:55:13 -- nvmf/common.sh@422 -- # return 0 00:26:49.947 09:55:13 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:26:49.947 09:55:13 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:26:49.947 09:55:13 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:26:49.947 09:55:13 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:26:49.947 09:55:13 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:26:49.947 09:55:13 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:26:49.947 09:55:13 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:26:49.947 09:55:13 -- host/identify_kernel_nvmf.sh@13 -- # trap 'nvmftestfini || :; clean_kernel_target' EXIT 00:26:49.947 09:55:13 -- host/identify_kernel_nvmf.sh@15 -- # get_main_ns_ip 00:26:49.947 09:55:13 -- nvmf/common.sh@717 -- # local ip 00:26:49.947 09:55:13 -- nvmf/common.sh@718 -- # ip_candidates=() 00:26:49.947 09:55:13 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:26:49.947 09:55:13 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:49.947 09:55:13 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:49.947 09:55:13 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:26:49.947 09:55:13 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:49.947 09:55:13 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:26:49.947 09:55:13 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:26:49.947 09:55:13 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:26:49.947 09:55:13 -- host/identify_kernel_nvmf.sh@15 -- # target_ip=10.0.0.1 00:26:49.947 09:55:13 -- host/identify_kernel_nvmf.sh@16 -- # configure_kernel_target nqn.2016-06.io.spdk:testnqn 10.0.0.1 00:26:49.947 09:55:13 -- nvmf/common.sh@621 -- 
# local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=10.0.0.1 00:26:49.947 09:55:13 -- nvmf/common.sh@623 -- # nvmet=/sys/kernel/config/nvmet 00:26:49.947 09:55:13 -- nvmf/common.sh@624 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:26:49.947 09:55:13 -- nvmf/common.sh@625 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:26:49.947 09:55:13 -- nvmf/common.sh@626 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:26:49.947 09:55:13 -- nvmf/common.sh@628 -- # local block nvme 00:26:49.947 09:55:13 -- nvmf/common.sh@630 -- # [[ ! -e /sys/module/nvmet ]] 00:26:49.947 09:55:13 -- nvmf/common.sh@631 -- # modprobe nvmet 00:26:49.947 09:55:13 -- nvmf/common.sh@634 -- # [[ -e /sys/kernel/config/nvmet ]] 00:26:49.947 09:55:13 -- nvmf/common.sh@636 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:26:50.205 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:26:50.205 Waiting for block devices as requested 00:26:50.205 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:26:50.463 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:26:50.463 09:55:13 -- nvmf/common.sh@639 -- # for block in /sys/block/nvme* 00:26:50.463 09:55:13 -- nvmf/common.sh@640 -- # [[ -e /sys/block/nvme0n1 ]] 00:26:50.463 09:55:13 -- nvmf/common.sh@641 -- # is_block_zoned nvme0n1 00:26:50.463 09:55:13 -- common/autotest_common.sh@1648 -- # local device=nvme0n1 00:26:50.463 09:55:13 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:26:50.463 09:55:13 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:26:50.463 09:55:13 -- nvmf/common.sh@642 -- # block_in_use nvme0n1 00:26:50.463 09:55:13 -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:26:50.463 09:55:13 -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n1 00:26:50.463 No valid GPT data, bailing 00:26:50.463 09:55:13 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:26:50.463 09:55:13 -- scripts/common.sh@391 -- # pt= 00:26:50.463 09:55:13 -- scripts/common.sh@392 -- # return 1 00:26:50.463 09:55:13 -- nvmf/common.sh@642 -- # nvme=/dev/nvme0n1 00:26:50.463 09:55:13 -- nvmf/common.sh@639 -- # for block in /sys/block/nvme* 00:26:50.463 09:55:13 -- nvmf/common.sh@640 -- # [[ -e /sys/block/nvme0n2 ]] 00:26:50.463 09:55:13 -- nvmf/common.sh@641 -- # is_block_zoned nvme0n2 00:26:50.463 09:55:13 -- common/autotest_common.sh@1648 -- # local device=nvme0n2 00:26:50.463 09:55:13 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n2/queue/zoned ]] 00:26:50.463 09:55:13 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:26:50.463 09:55:13 -- nvmf/common.sh@642 -- # block_in_use nvme0n2 00:26:50.463 09:55:13 -- scripts/common.sh@378 -- # local block=nvme0n2 pt 00:26:50.463 09:55:13 -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n2 00:26:50.721 No valid GPT data, bailing 00:26:50.721 09:55:13 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n2 00:26:50.721 09:55:13 -- scripts/common.sh@391 -- # pt= 00:26:50.721 09:55:13 -- scripts/common.sh@392 -- # return 1 00:26:50.721 09:55:13 -- nvmf/common.sh@642 -- # nvme=/dev/nvme0n2 00:26:50.721 09:55:13 -- nvmf/common.sh@639 -- # for block in /sys/block/nvme* 00:26:50.721 09:55:13 -- nvmf/common.sh@640 -- # [[ -e /sys/block/nvme0n3 ]] 00:26:50.721 09:55:13 -- nvmf/common.sh@641 -- # is_block_zoned 
nvme0n3 00:26:50.721 09:55:13 -- common/autotest_common.sh@1648 -- # local device=nvme0n3 00:26:50.721 09:55:13 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n3/queue/zoned ]] 00:26:50.721 09:55:13 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:26:50.721 09:55:13 -- nvmf/common.sh@642 -- # block_in_use nvme0n3 00:26:50.721 09:55:13 -- scripts/common.sh@378 -- # local block=nvme0n3 pt 00:26:50.721 09:55:13 -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n3 00:26:50.721 No valid GPT data, bailing 00:26:50.721 09:55:13 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n3 00:26:50.721 09:55:13 -- scripts/common.sh@391 -- # pt= 00:26:50.721 09:55:13 -- scripts/common.sh@392 -- # return 1 00:26:50.721 09:55:13 -- nvmf/common.sh@642 -- # nvme=/dev/nvme0n3 00:26:50.721 09:55:13 -- nvmf/common.sh@639 -- # for block in /sys/block/nvme* 00:26:50.721 09:55:13 -- nvmf/common.sh@640 -- # [[ -e /sys/block/nvme1n1 ]] 00:26:50.721 09:55:13 -- nvmf/common.sh@641 -- # is_block_zoned nvme1n1 00:26:50.721 09:55:13 -- common/autotest_common.sh@1648 -- # local device=nvme1n1 00:26:50.721 09:55:13 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:26:50.721 09:55:13 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:26:50.721 09:55:13 -- nvmf/common.sh@642 -- # block_in_use nvme1n1 00:26:50.721 09:55:13 -- scripts/common.sh@378 -- # local block=nvme1n1 pt 00:26:50.721 09:55:13 -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme1n1 00:26:50.721 No valid GPT data, bailing 00:26:50.721 09:55:14 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:26:50.721 09:55:14 -- scripts/common.sh@391 -- # pt= 00:26:50.722 09:55:14 -- scripts/common.sh@392 -- # return 1 00:26:50.722 09:55:14 -- nvmf/common.sh@642 -- # nvme=/dev/nvme1n1 00:26:50.722 09:55:14 -- nvmf/common.sh@645 -- # [[ -b /dev/nvme1n1 ]] 00:26:50.722 09:55:14 -- nvmf/common.sh@647 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:26:50.722 09:55:14 -- nvmf/common.sh@648 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:26:50.722 09:55:14 -- nvmf/common.sh@649 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:26:50.722 09:55:14 -- nvmf/common.sh@654 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn 00:26:50.722 09:55:14 -- nvmf/common.sh@656 -- # echo 1 00:26:50.722 09:55:14 -- nvmf/common.sh@657 -- # echo /dev/nvme1n1 00:26:50.722 09:55:14 -- nvmf/common.sh@658 -- # echo 1 00:26:50.722 09:55:14 -- nvmf/common.sh@660 -- # echo 10.0.0.1 00:26:50.722 09:55:14 -- nvmf/common.sh@661 -- # echo tcp 00:26:50.722 09:55:14 -- nvmf/common.sh@662 -- # echo 4420 00:26:50.722 09:55:14 -- nvmf/common.sh@663 -- # echo ipv4 00:26:50.722 09:55:14 -- nvmf/common.sh@666 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/ 00:26:50.722 09:55:14 -- nvmf/common.sh@669 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:e0c1fd61-2902-4c83-8893-eeb512aeb67d --hostid=e0c1fd61-2902-4c83-8893-eeb512aeb67d -a 10.0.0.1 -t tcp -s 4420 00:26:50.722 00:26:50.722 Discovery Log Number of Records 2, Generation counter 2 00:26:50.722 =====Discovery Log Entry 0====== 00:26:50.722 trtype: tcp 00:26:50.722 adrfam: ipv4 00:26:50.722 subtype: current discovery subsystem 00:26:50.722 treq: not specified, sq flow control disable supported 00:26:50.722 portid: 1 00:26:50.722 trsvcid: 4420 00:26:50.722 
subnqn: nqn.2014-08.org.nvmexpress.discovery 00:26:50.722 traddr: 10.0.0.1 00:26:50.722 eflags: none 00:26:50.722 sectype: none 00:26:50.722 =====Discovery Log Entry 1====== 00:26:50.722 trtype: tcp 00:26:50.722 adrfam: ipv4 00:26:50.722 subtype: nvme subsystem 00:26:50.722 treq: not specified, sq flow control disable supported 00:26:50.722 portid: 1 00:26:50.722 trsvcid: 4420 00:26:50.722 subnqn: nqn.2016-06.io.spdk:testnqn 00:26:50.722 traddr: 10.0.0.1 00:26:50.722 eflags: none 00:26:50.722 sectype: none 00:26:50.722 09:55:14 -- host/identify_kernel_nvmf.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.1 00:26:50.722 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' 00:26:50.981 ===================================================== 00:26:50.981 NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2014-08.org.nvmexpress.discovery 00:26:50.981 ===================================================== 00:26:50.981 Controller Capabilities/Features 00:26:50.981 ================================ 00:26:50.981 Vendor ID: 0000 00:26:50.981 Subsystem Vendor ID: 0000 00:26:50.981 Serial Number: abc0e90031bb2e563098 00:26:50.981 Model Number: Linux 00:26:50.981 Firmware Version: 6.7.0-68 00:26:50.981 Recommended Arb Burst: 0 00:26:50.981 IEEE OUI Identifier: 00 00 00 00:26:50.981 Multi-path I/O 00:26:50.981 May have multiple subsystem ports: No 00:26:50.981 May have multiple controllers: No 00:26:50.981 Associated with SR-IOV VF: No 00:26:50.981 Max Data Transfer Size: Unlimited 00:26:50.981 Max Number of Namespaces: 0 00:26:50.981 Max Number of I/O Queues: 1024 00:26:50.981 NVMe Specification Version (VS): 1.3 00:26:50.981 NVMe Specification Version (Identify): 1.3 00:26:50.981 Maximum Queue Entries: 1024 00:26:50.981 Contiguous Queues Required: No 00:26:50.981 Arbitration Mechanisms Supported 00:26:50.981 Weighted Round Robin: Not Supported 00:26:50.981 Vendor Specific: Not Supported 00:26:50.981 Reset Timeout: 7500 ms 00:26:50.981 Doorbell Stride: 4 bytes 00:26:50.981 NVM Subsystem Reset: Not Supported 00:26:50.981 Command Sets Supported 00:26:50.981 NVM Command Set: Supported 00:26:50.981 Boot Partition: Not Supported 00:26:50.981 Memory Page Size Minimum: 4096 bytes 00:26:50.981 Memory Page Size Maximum: 4096 bytes 00:26:50.981 Persistent Memory Region: Not Supported 00:26:50.981 Optional Asynchronous Events Supported 00:26:50.981 Namespace Attribute Notices: Not Supported 00:26:50.981 Firmware Activation Notices: Not Supported 00:26:50.981 ANA Change Notices: Not Supported 00:26:50.981 PLE Aggregate Log Change Notices: Not Supported 00:26:50.981 LBA Status Info Alert Notices: Not Supported 00:26:50.981 EGE Aggregate Log Change Notices: Not Supported 00:26:50.981 Normal NVM Subsystem Shutdown event: Not Supported 00:26:50.981 Zone Descriptor Change Notices: Not Supported 00:26:50.981 Discovery Log Change Notices: Supported 00:26:50.981 Controller Attributes 00:26:50.981 128-bit Host Identifier: Not Supported 00:26:50.981 Non-Operational Permissive Mode: Not Supported 00:26:50.981 NVM Sets: Not Supported 00:26:50.981 Read Recovery Levels: Not Supported 00:26:50.981 Endurance Groups: Not Supported 00:26:50.981 Predictable Latency Mode: Not Supported 00:26:50.981 Traffic Based Keep ALive: Not Supported 00:26:50.981 Namespace Granularity: Not Supported 00:26:50.981 SQ Associations: Not Supported 00:26:50.981 UUID List: Not Supported 00:26:50.981 Multi-Domain Subsystem: Not Supported 00:26:50.981 Fixed Capacity Management: Not Supported 
00:26:50.981 Variable Capacity Management: Not Supported 00:26:50.981 Delete Endurance Group: Not Supported 00:26:50.981 Delete NVM Set: Not Supported 00:26:50.981 Extended LBA Formats Supported: Not Supported 00:26:50.981 Flexible Data Placement Supported: Not Supported 00:26:50.981 00:26:50.981 Controller Memory Buffer Support 00:26:50.981 ================================ 00:26:50.981 Supported: No 00:26:50.981 00:26:50.981 Persistent Memory Region Support 00:26:50.981 ================================ 00:26:50.981 Supported: No 00:26:50.981 00:26:50.981 Admin Command Set Attributes 00:26:50.981 ============================ 00:26:50.981 Security Send/Receive: Not Supported 00:26:50.981 Format NVM: Not Supported 00:26:50.981 Firmware Activate/Download: Not Supported 00:26:50.981 Namespace Management: Not Supported 00:26:50.981 Device Self-Test: Not Supported 00:26:50.981 Directives: Not Supported 00:26:50.981 NVMe-MI: Not Supported 00:26:50.981 Virtualization Management: Not Supported 00:26:50.981 Doorbell Buffer Config: Not Supported 00:26:50.981 Get LBA Status Capability: Not Supported 00:26:50.981 Command & Feature Lockdown Capability: Not Supported 00:26:50.981 Abort Command Limit: 1 00:26:50.981 Async Event Request Limit: 1 00:26:50.981 Number of Firmware Slots: N/A 00:26:50.981 Firmware Slot 1 Read-Only: N/A 00:26:50.981 Firmware Activation Without Reset: N/A 00:26:50.981 Multiple Update Detection Support: N/A 00:26:50.981 Firmware Update Granularity: No Information Provided 00:26:50.981 Per-Namespace SMART Log: No 00:26:50.981 Asymmetric Namespace Access Log Page: Not Supported 00:26:50.981 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery 00:26:50.981 Command Effects Log Page: Not Supported 00:26:50.981 Get Log Page Extended Data: Supported 00:26:50.981 Telemetry Log Pages: Not Supported 00:26:50.981 Persistent Event Log Pages: Not Supported 00:26:50.981 Supported Log Pages Log Page: May Support 00:26:50.981 Commands Supported & Effects Log Page: Not Supported 00:26:50.981 Feature Identifiers & Effects Log Page:May Support 00:26:50.981 NVMe-MI Commands & Effects Log Page: May Support 00:26:50.981 Data Area 4 for Telemetry Log: Not Supported 00:26:50.981 Error Log Page Entries Supported: 1 00:26:50.981 Keep Alive: Not Supported 00:26:50.981 00:26:50.981 NVM Command Set Attributes 00:26:50.981 ========================== 00:26:50.981 Submission Queue Entry Size 00:26:50.981 Max: 1 00:26:50.981 Min: 1 00:26:50.981 Completion Queue Entry Size 00:26:50.981 Max: 1 00:26:50.981 Min: 1 00:26:50.981 Number of Namespaces: 0 00:26:50.981 Compare Command: Not Supported 00:26:50.981 Write Uncorrectable Command: Not Supported 00:26:50.981 Dataset Management Command: Not Supported 00:26:50.981 Write Zeroes Command: Not Supported 00:26:50.981 Set Features Save Field: Not Supported 00:26:50.981 Reservations: Not Supported 00:26:50.981 Timestamp: Not Supported 00:26:50.981 Copy: Not Supported 00:26:50.981 Volatile Write Cache: Not Present 00:26:50.981 Atomic Write Unit (Normal): 1 00:26:50.981 Atomic Write Unit (PFail): 1 00:26:50.981 Atomic Compare & Write Unit: 1 00:26:50.981 Fused Compare & Write: Not Supported 00:26:50.981 Scatter-Gather List 00:26:50.981 SGL Command Set: Supported 00:26:50.981 SGL Keyed: Not Supported 00:26:50.981 SGL Bit Bucket Descriptor: Not Supported 00:26:50.981 SGL Metadata Pointer: Not Supported 00:26:50.981 Oversized SGL: Not Supported 00:26:50.981 SGL Metadata Address: Not Supported 00:26:50.981 SGL Offset: Supported 00:26:50.981 Transport SGL Data Block: Not 
Supported 00:26:50.981 Replay Protected Memory Block: Not Supported 00:26:50.981 00:26:50.981 Firmware Slot Information 00:26:50.981 ========================= 00:26:50.981 Active slot: 0 00:26:50.981 00:26:50.981 00:26:50.981 Error Log 00:26:50.981 ========= 00:26:50.981 00:26:50.981 Active Namespaces 00:26:50.981 ================= 00:26:50.981 Discovery Log Page 00:26:50.981 ================== 00:26:50.981 Generation Counter: 2 00:26:50.981 Number of Records: 2 00:26:50.981 Record Format: 0 00:26:50.981 00:26:50.981 Discovery Log Entry 0 00:26:50.981 ---------------------- 00:26:50.981 Transport Type: 3 (TCP) 00:26:50.981 Address Family: 1 (IPv4) 00:26:50.981 Subsystem Type: 3 (Current Discovery Subsystem) 00:26:50.981 Entry Flags: 00:26:50.981 Duplicate Returned Information: 0 00:26:50.981 Explicit Persistent Connection Support for Discovery: 0 00:26:50.981 Transport Requirements: 00:26:50.981 Secure Channel: Not Specified 00:26:50.981 Port ID: 1 (0x0001) 00:26:50.981 Controller ID: 65535 (0xffff) 00:26:50.982 Admin Max SQ Size: 32 00:26:50.982 Transport Service Identifier: 4420 00:26:50.982 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery 00:26:50.982 Transport Address: 10.0.0.1 00:26:50.982 Discovery Log Entry 1 00:26:50.982 ---------------------- 00:26:50.982 Transport Type: 3 (TCP) 00:26:50.982 Address Family: 1 (IPv4) 00:26:50.982 Subsystem Type: 2 (NVM Subsystem) 00:26:50.982 Entry Flags: 00:26:50.982 Duplicate Returned Information: 0 00:26:50.982 Explicit Persistent Connection Support for Discovery: 0 00:26:50.982 Transport Requirements: 00:26:50.982 Secure Channel: Not Specified 00:26:50.982 Port ID: 1 (0x0001) 00:26:50.982 Controller ID: 65535 (0xffff) 00:26:50.982 Admin Max SQ Size: 32 00:26:50.982 Transport Service Identifier: 4420 00:26:50.982 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:testnqn 00:26:50.982 Transport Address: 10.0.0.1 00:26:50.982 09:55:14 -- host/identify_kernel_nvmf.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:26:51.241 get_feature(0x01) failed 00:26:51.241 get_feature(0x02) failed 00:26:51.241 get_feature(0x04) failed 00:26:51.241 ===================================================== 00:26:51.241 NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:26:51.241 ===================================================== 00:26:51.241 Controller Capabilities/Features 00:26:51.241 ================================ 00:26:51.241 Vendor ID: 0000 00:26:51.241 Subsystem Vendor ID: 0000 00:26:51.241 Serial Number: 1346a3a0e44398bfa617 00:26:51.241 Model Number: SPDK-nqn.2016-06.io.spdk:testnqn 00:26:51.241 Firmware Version: 6.7.0-68 00:26:51.241 Recommended Arb Burst: 6 00:26:51.241 IEEE OUI Identifier: 00 00 00 00:26:51.241 Multi-path I/O 00:26:51.241 May have multiple subsystem ports: Yes 00:26:51.241 May have multiple controllers: Yes 00:26:51.241 Associated with SR-IOV VF: No 00:26:51.241 Max Data Transfer Size: Unlimited 00:26:51.241 Max Number of Namespaces: 1024 00:26:51.241 Max Number of I/O Queues: 128 00:26:51.241 NVMe Specification Version (VS): 1.3 00:26:51.241 NVMe Specification Version (Identify): 1.3 00:26:51.241 Maximum Queue Entries: 1024 00:26:51.241 Contiguous Queues Required: No 00:26:51.241 Arbitration Mechanisms Supported 00:26:51.241 Weighted Round Robin: Not Supported 00:26:51.241 Vendor Specific: Not Supported 00:26:51.241 Reset Timeout: 7500 ms 00:26:51.241 Doorbell Stride: 4 bytes 
00:26:51.241 NVM Subsystem Reset: Not Supported 00:26:51.241 Command Sets Supported 00:26:51.241 NVM Command Set: Supported 00:26:51.241 Boot Partition: Not Supported 00:26:51.241 Memory Page Size Minimum: 4096 bytes 00:26:51.241 Memory Page Size Maximum: 4096 bytes 00:26:51.241 Persistent Memory Region: Not Supported 00:26:51.241 Optional Asynchronous Events Supported 00:26:51.241 Namespace Attribute Notices: Supported 00:26:51.241 Firmware Activation Notices: Not Supported 00:26:51.241 ANA Change Notices: Supported 00:26:51.241 PLE Aggregate Log Change Notices: Not Supported 00:26:51.241 LBA Status Info Alert Notices: Not Supported 00:26:51.241 EGE Aggregate Log Change Notices: Not Supported 00:26:51.241 Normal NVM Subsystem Shutdown event: Not Supported 00:26:51.241 Zone Descriptor Change Notices: Not Supported 00:26:51.241 Discovery Log Change Notices: Not Supported 00:26:51.241 Controller Attributes 00:26:51.241 128-bit Host Identifier: Supported 00:26:51.241 Non-Operational Permissive Mode: Not Supported 00:26:51.241 NVM Sets: Not Supported 00:26:51.241 Read Recovery Levels: Not Supported 00:26:51.241 Endurance Groups: Not Supported 00:26:51.241 Predictable Latency Mode: Not Supported 00:26:51.241 Traffic Based Keep ALive: Supported 00:26:51.241 Namespace Granularity: Not Supported 00:26:51.241 SQ Associations: Not Supported 00:26:51.241 UUID List: Not Supported 00:26:51.241 Multi-Domain Subsystem: Not Supported 00:26:51.241 Fixed Capacity Management: Not Supported 00:26:51.241 Variable Capacity Management: Not Supported 00:26:51.241 Delete Endurance Group: Not Supported 00:26:51.241 Delete NVM Set: Not Supported 00:26:51.241 Extended LBA Formats Supported: Not Supported 00:26:51.241 Flexible Data Placement Supported: Not Supported 00:26:51.241 00:26:51.241 Controller Memory Buffer Support 00:26:51.241 ================================ 00:26:51.241 Supported: No 00:26:51.241 00:26:51.241 Persistent Memory Region Support 00:26:51.241 ================================ 00:26:51.241 Supported: No 00:26:51.241 00:26:51.241 Admin Command Set Attributes 00:26:51.241 ============================ 00:26:51.241 Security Send/Receive: Not Supported 00:26:51.241 Format NVM: Not Supported 00:26:51.241 Firmware Activate/Download: Not Supported 00:26:51.241 Namespace Management: Not Supported 00:26:51.241 Device Self-Test: Not Supported 00:26:51.241 Directives: Not Supported 00:26:51.241 NVMe-MI: Not Supported 00:26:51.241 Virtualization Management: Not Supported 00:26:51.241 Doorbell Buffer Config: Not Supported 00:26:51.241 Get LBA Status Capability: Not Supported 00:26:51.241 Command & Feature Lockdown Capability: Not Supported 00:26:51.241 Abort Command Limit: 4 00:26:51.241 Async Event Request Limit: 4 00:26:51.241 Number of Firmware Slots: N/A 00:26:51.241 Firmware Slot 1 Read-Only: N/A 00:26:51.241 Firmware Activation Without Reset: N/A 00:26:51.241 Multiple Update Detection Support: N/A 00:26:51.241 Firmware Update Granularity: No Information Provided 00:26:51.241 Per-Namespace SMART Log: Yes 00:26:51.242 Asymmetric Namespace Access Log Page: Supported 00:26:51.242 ANA Transition Time : 10 sec 00:26:51.242 00:26:51.242 Asymmetric Namespace Access Capabilities 00:26:51.242 ANA Optimized State : Supported 00:26:51.242 ANA Non-Optimized State : Supported 00:26:51.242 ANA Inaccessible State : Supported 00:26:51.242 ANA Persistent Loss State : Supported 00:26:51.242 ANA Change State : Supported 00:26:51.242 ANAGRPID is not changed : No 00:26:51.242 Non-Zero ANAGRPID for NS Mgmt Cmd : Not Supported 
00:26:51.242 00:26:51.242 ANA Group Identifier Maximum : 128 00:26:51.242 Number of ANA Group Identifiers : 128 00:26:51.242 Max Number of Allowed Namespaces : 1024 00:26:51.242 Subsystem NQN: nqn.2016-06.io.spdk:testnqn 00:26:51.242 Command Effects Log Page: Supported 00:26:51.242 Get Log Page Extended Data: Supported 00:26:51.242 Telemetry Log Pages: Not Supported 00:26:51.242 Persistent Event Log Pages: Not Supported 00:26:51.242 Supported Log Pages Log Page: May Support 00:26:51.242 Commands Supported & Effects Log Page: Not Supported 00:26:51.242 Feature Identifiers & Effects Log Page:May Support 00:26:51.242 NVMe-MI Commands & Effects Log Page: May Support 00:26:51.242 Data Area 4 for Telemetry Log: Not Supported 00:26:51.242 Error Log Page Entries Supported: 128 00:26:51.242 Keep Alive: Supported 00:26:51.242 Keep Alive Granularity: 1000 ms 00:26:51.242 00:26:51.242 NVM Command Set Attributes 00:26:51.242 ========================== 00:26:51.242 Submission Queue Entry Size 00:26:51.242 Max: 64 00:26:51.242 Min: 64 00:26:51.242 Completion Queue Entry Size 00:26:51.242 Max: 16 00:26:51.242 Min: 16 00:26:51.242 Number of Namespaces: 1024 00:26:51.242 Compare Command: Not Supported 00:26:51.242 Write Uncorrectable Command: Not Supported 00:26:51.242 Dataset Management Command: Supported 00:26:51.242 Write Zeroes Command: Supported 00:26:51.242 Set Features Save Field: Not Supported 00:26:51.242 Reservations: Not Supported 00:26:51.242 Timestamp: Not Supported 00:26:51.242 Copy: Not Supported 00:26:51.242 Volatile Write Cache: Present 00:26:51.242 Atomic Write Unit (Normal): 1 00:26:51.242 Atomic Write Unit (PFail): 1 00:26:51.242 Atomic Compare & Write Unit: 1 00:26:51.242 Fused Compare & Write: Not Supported 00:26:51.242 Scatter-Gather List 00:26:51.242 SGL Command Set: Supported 00:26:51.242 SGL Keyed: Not Supported 00:26:51.242 SGL Bit Bucket Descriptor: Not Supported 00:26:51.242 SGL Metadata Pointer: Not Supported 00:26:51.242 Oversized SGL: Not Supported 00:26:51.242 SGL Metadata Address: Not Supported 00:26:51.242 SGL Offset: Supported 00:26:51.242 Transport SGL Data Block: Not Supported 00:26:51.242 Replay Protected Memory Block: Not Supported 00:26:51.242 00:26:51.242 Firmware Slot Information 00:26:51.242 ========================= 00:26:51.242 Active slot: 0 00:26:51.242 00:26:51.242 Asymmetric Namespace Access 00:26:51.242 =========================== 00:26:51.242 Change Count : 0 00:26:51.242 Number of ANA Group Descriptors : 1 00:26:51.242 ANA Group Descriptor : 0 00:26:51.242 ANA Group ID : 1 00:26:51.242 Number of NSID Values : 1 00:26:51.242 Change Count : 0 00:26:51.242 ANA State : 1 00:26:51.242 Namespace Identifier : 1 00:26:51.242 00:26:51.242 Commands Supported and Effects 00:26:51.242 ============================== 00:26:51.242 Admin Commands 00:26:51.242 -------------- 00:26:51.242 Get Log Page (02h): Supported 00:26:51.242 Identify (06h): Supported 00:26:51.242 Abort (08h): Supported 00:26:51.242 Set Features (09h): Supported 00:26:51.242 Get Features (0Ah): Supported 00:26:51.242 Asynchronous Event Request (0Ch): Supported 00:26:51.242 Keep Alive (18h): Supported 00:26:51.242 I/O Commands 00:26:51.242 ------------ 00:26:51.242 Flush (00h): Supported 00:26:51.242 Write (01h): Supported LBA-Change 00:26:51.242 Read (02h): Supported 00:26:51.242 Write Zeroes (08h): Supported LBA-Change 00:26:51.242 Dataset Management (09h): Supported 00:26:51.242 00:26:51.242 Error Log 00:26:51.242 ========= 00:26:51.242 Entry: 0 00:26:51.242 Error Count: 0x3 00:26:51.242 Submission 
Queue Id: 0x0 00:26:51.242 Command Id: 0x5 00:26:51.242 Phase Bit: 0 00:26:51.242 Status Code: 0x2 00:26:51.242 Status Code Type: 0x0 00:26:51.242 Do Not Retry: 1 00:26:51.242 Error Location: 0x28 00:26:51.242 LBA: 0x0 00:26:51.242 Namespace: 0x0 00:26:51.242 Vendor Log Page: 0x0 00:26:51.242 ----------- 00:26:51.242 Entry: 1 00:26:51.242 Error Count: 0x2 00:26:51.242 Submission Queue Id: 0x0 00:26:51.242 Command Id: 0x5 00:26:51.242 Phase Bit: 0 00:26:51.242 Status Code: 0x2 00:26:51.242 Status Code Type: 0x0 00:26:51.242 Do Not Retry: 1 00:26:51.242 Error Location: 0x28 00:26:51.242 LBA: 0x0 00:26:51.242 Namespace: 0x0 00:26:51.242 Vendor Log Page: 0x0 00:26:51.242 ----------- 00:26:51.242 Entry: 2 00:26:51.242 Error Count: 0x1 00:26:51.242 Submission Queue Id: 0x0 00:26:51.242 Command Id: 0x4 00:26:51.242 Phase Bit: 0 00:26:51.242 Status Code: 0x2 00:26:51.242 Status Code Type: 0x0 00:26:51.242 Do Not Retry: 1 00:26:51.242 Error Location: 0x28 00:26:51.242 LBA: 0x0 00:26:51.242 Namespace: 0x0 00:26:51.242 Vendor Log Page: 0x0 00:26:51.242 00:26:51.242 Number of Queues 00:26:51.242 ================ 00:26:51.242 Number of I/O Submission Queues: 128 00:26:51.242 Number of I/O Completion Queues: 128 00:26:51.242 00:26:51.242 ZNS Specific Controller Data 00:26:51.242 ============================ 00:26:51.242 Zone Append Size Limit: 0 00:26:51.242 00:26:51.242 00:26:51.242 Active Namespaces 00:26:51.242 ================= 00:26:51.242 get_feature(0x05) failed 00:26:51.242 Namespace ID:1 00:26:51.242 Command Set Identifier: NVM (00h) 00:26:51.242 Deallocate: Supported 00:26:51.242 Deallocated/Unwritten Error: Not Supported 00:26:51.242 Deallocated Read Value: Unknown 00:26:51.242 Deallocate in Write Zeroes: Not Supported 00:26:51.242 Deallocated Guard Field: 0xFFFF 00:26:51.242 Flush: Supported 00:26:51.242 Reservation: Not Supported 00:26:51.242 Namespace Sharing Capabilities: Multiple Controllers 00:26:51.242 Size (in LBAs): 1310720 (5GiB) 00:26:51.242 Capacity (in LBAs): 1310720 (5GiB) 00:26:51.242 Utilization (in LBAs): 1310720 (5GiB) 00:26:51.242 UUID: 0efe6c61-5517-4b9d-abfd-a781996df27c 00:26:51.242 Thin Provisioning: Not Supported 00:26:51.242 Per-NS Atomic Units: Yes 00:26:51.242 Atomic Boundary Size (Normal): 0 00:26:51.242 Atomic Boundary Size (PFail): 0 00:26:51.242 Atomic Boundary Offset: 0 00:26:51.242 NGUID/EUI64 Never Reused: No 00:26:51.242 ANA group ID: 1 00:26:51.242 Namespace Write Protected: No 00:26:51.242 Number of LBA Formats: 1 00:26:51.242 Current LBA Format: LBA Format #00 00:26:51.242 LBA Format #00: Data Size: 4096 Metadata Size: 0 00:26:51.242 00:26:51.242 09:55:14 -- host/identify_kernel_nvmf.sh@1 -- # nvmftestfini 00:26:51.242 09:55:14 -- nvmf/common.sh@477 -- # nvmfcleanup 00:26:51.242 09:55:14 -- nvmf/common.sh@117 -- # sync 00:26:51.500 09:55:14 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:26:51.500 09:55:14 -- nvmf/common.sh@120 -- # set +e 00:26:51.500 09:55:14 -- nvmf/common.sh@121 -- # for i in {1..20} 00:26:51.500 09:55:14 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:26:51.500 rmmod nvme_tcp 00:26:51.500 rmmod nvme_fabrics 00:26:51.500 09:55:14 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:26:51.500 09:55:14 -- nvmf/common.sh@124 -- # set -e 00:26:51.500 09:55:14 -- nvmf/common.sh@125 -- # return 0 00:26:51.500 09:55:14 -- nvmf/common.sh@478 -- # '[' -n '' ']' 00:26:51.500 09:55:14 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:26:51.500 09:55:14 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:26:51.500 09:55:14 -- 
nvmf/common.sh@485 -- # nvmf_tcp_fini 00:26:51.500 09:55:14 -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:26:51.500 09:55:14 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:26:51.500 09:55:14 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:51.500 09:55:14 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:26:51.500 09:55:14 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:51.500 09:55:14 -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:26:51.500 09:55:14 -- host/identify_kernel_nvmf.sh@1 -- # clean_kernel_target 00:26:51.500 09:55:14 -- nvmf/common.sh@673 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]] 00:26:51.500 09:55:14 -- nvmf/common.sh@675 -- # echo 0 00:26:51.500 09:55:14 -- nvmf/common.sh@677 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn 00:26:51.500 09:55:14 -- nvmf/common.sh@678 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:26:51.500 09:55:14 -- nvmf/common.sh@679 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:26:51.500 09:55:14 -- nvmf/common.sh@680 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:26:51.500 09:55:14 -- nvmf/common.sh@682 -- # modules=(/sys/module/nvmet/holders/*) 00:26:51.500 09:55:14 -- nvmf/common.sh@684 -- # modprobe -r nvmet_tcp nvmet 00:26:51.500 09:55:14 -- nvmf/common.sh@687 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:26:52.067 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:26:52.329 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:26:52.329 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:26:52.329 00:26:52.329 real 0m2.825s 00:26:52.329 user 0m0.999s 00:26:52.329 sys 0m1.335s 00:26:52.329 09:55:15 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:26:52.329 ************************************ 00:26:52.329 END TEST nvmf_identify_kernel_target 00:26:52.329 ************************************ 00:26:52.329 09:55:15 -- common/autotest_common.sh@10 -- # set +x 00:26:52.329 09:55:15 -- nvmf/nvmf.sh@102 -- # run_test nvmf_auth /home/vagrant/spdk_repo/spdk/test/nvmf/host/auth.sh --transport=tcp 00:26:52.329 09:55:15 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:26:52.329 09:55:15 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:26:52.329 09:55:15 -- common/autotest_common.sh@10 -- # set +x 00:26:52.613 ************************************ 00:26:52.613 START TEST nvmf_auth 00:26:52.613 ************************************ 00:26:52.613 09:55:15 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/auth.sh --transport=tcp 00:26:52.613 * Looking for test storage... 
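The identify_kernel_nvmf run above (and the auth test that starts next) drives the Linux kernel NVMe-oF target entirely through configfs: the mkdir/echo/ln -s sequence under /sys/kernel/config/nvmet, undone again by clean_kernel_target. A minimal standalone sketch of that lifecycle follows, reusing the NQN, listen address, and namespace device from this log; the redirection targets are not visible in the xtrace output, so the attribute names below are the standard nvmet configfs ones and are an assumption.

#!/usr/bin/env bash
# Sketch only: manual equivalent of configure_kernel_target / clean_kernel_target.
nqn=nqn.2016-06.io.spdk:testnqn
subsys=/sys/kernel/config/nvmet/subsystems/$nqn
port=/sys/kernel/config/nvmet/ports/1

modprobe nvmet                     # target core; the tcp transport module is loaded when the port is enabled
mkdir "$subsys" "$subsys/namespaces/1" "$port"

echo 1            > "$subsys/attr_allow_any_host"       # assumption: no host allow-list
echo /dev/nvme1n1 > "$subsys/namespaces/1/device_path"  # block device selected by the GPT scan above
echo 1            > "$subsys/namespaces/1/enable"
echo 10.0.0.1     > "$port/addr_traddr"                 # initiator-side veth address from the log
echo tcp          > "$port/addr_trtype"
echo 4420         > "$port/addr_trsvcid"
echo ipv4         > "$port/addr_adrfam"
ln -s "$subsys" "$port/subsystems/"                     # expose the subsystem on the port
# (the test also writes a SPDK-<nqn> model string; omitted here)

# teardown, mirroring clean_kernel_target above
rm -f "$port/subsystems/$nqn"
echo 0 > "$subsys/namespaces/1/enable"
rmdir "$subsys/namespaces/1" "$port" "$subsys"
modprobe -r nvmet_tcp nvmet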
00:26:52.613 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:26:52.613 09:55:15 -- host/auth.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:26:52.613 09:55:15 -- nvmf/common.sh@7 -- # uname -s 00:26:52.613 09:55:15 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:26:52.613 09:55:15 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:26:52.613 09:55:15 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:26:52.614 09:55:15 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:26:52.614 09:55:15 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:26:52.614 09:55:15 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:26:52.614 09:55:15 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:26:52.614 09:55:15 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:26:52.614 09:55:15 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:26:52.614 09:55:15 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:26:52.614 09:55:15 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:e0c1fd61-2902-4c83-8893-eeb512aeb67d 00:26:52.614 09:55:15 -- nvmf/common.sh@18 -- # NVME_HOSTID=e0c1fd61-2902-4c83-8893-eeb512aeb67d 00:26:52.614 09:55:15 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:26:52.614 09:55:15 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:26:52.614 09:55:15 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:26:52.614 09:55:15 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:26:52.614 09:55:15 -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:26:52.614 09:55:15 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:26:52.614 09:55:15 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:26:52.614 09:55:15 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:26:52.614 09:55:15 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:52.614 09:55:15 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:52.614 09:55:15 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:52.614 09:55:15 -- paths/export.sh@5 -- # export PATH 00:26:52.614 09:55:15 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:52.614 09:55:15 -- nvmf/common.sh@47 -- # : 0 00:26:52.614 09:55:15 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:26:52.614 09:55:15 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:26:52.614 09:55:15 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:26:52.614 09:55:15 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:26:52.614 09:55:15 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:26:52.614 09:55:15 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:26:52.614 09:55:15 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:26:52.614 09:55:15 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:26:52.614 09:55:15 -- host/auth.sh@13 -- # digests=("sha256" "sha384" "sha512") 00:26:52.614 09:55:15 -- host/auth.sh@16 -- # dhgroups=("ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:26:52.614 09:55:15 -- host/auth.sh@17 -- # subnqn=nqn.2024-02.io.spdk:cnode0 00:26:52.614 09:55:15 -- host/auth.sh@18 -- # hostnqn=nqn.2024-02.io.spdk:host0 00:26:52.614 09:55:15 -- host/auth.sh@19 -- # nvmet_subsys=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:26:52.614 09:55:15 -- host/auth.sh@20 -- # nvmet_host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:26:52.614 09:55:15 -- host/auth.sh@21 -- # keys=() 00:26:52.614 09:55:15 -- host/auth.sh@77 -- # nvmftestinit 00:26:52.614 09:55:15 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:26:52.614 09:55:15 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:26:52.614 09:55:15 -- nvmf/common.sh@437 -- # prepare_net_devs 00:26:52.614 09:55:15 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:26:52.614 09:55:15 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:26:52.614 09:55:15 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:52.614 09:55:15 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:26:52.614 09:55:15 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:52.614 09:55:15 -- nvmf/common.sh@403 -- # [[ virt != virt ]] 00:26:52.614 09:55:15 -- nvmf/common.sh@405 -- # [[ no == yes ]] 00:26:52.614 09:55:15 -- nvmf/common.sh@412 -- # [[ virt == phy ]] 00:26:52.614 09:55:15 -- nvmf/common.sh@415 -- # [[ virt == phy-fallback ]] 00:26:52.614 09:55:15 -- nvmf/common.sh@420 -- # [[ tcp == tcp ]] 00:26:52.614 09:55:15 -- nvmf/common.sh@421 -- # nvmf_veth_init 00:26:52.614 09:55:15 -- 
nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:26:52.614 09:55:15 -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:26:52.614 09:55:15 -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:26:52.614 09:55:15 -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:26:52.614 09:55:15 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:26:52.614 09:55:15 -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:26:52.614 09:55:15 -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:26:52.614 09:55:15 -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:26:52.614 09:55:15 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:26:52.614 09:55:15 -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:26:52.614 09:55:15 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:26:52.614 09:55:15 -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:26:52.614 09:55:15 -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:26:52.614 09:55:15 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:26:52.614 Cannot find device "nvmf_tgt_br" 00:26:52.614 09:55:15 -- nvmf/common.sh@155 -- # true 00:26:52.614 09:55:15 -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:26:52.614 Cannot find device "nvmf_tgt_br2" 00:26:52.614 09:55:15 -- nvmf/common.sh@156 -- # true 00:26:52.614 09:55:15 -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:26:52.614 09:55:15 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:26:52.614 Cannot find device "nvmf_tgt_br" 00:26:52.614 09:55:15 -- nvmf/common.sh@158 -- # true 00:26:52.614 09:55:15 -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:26:52.614 Cannot find device "nvmf_tgt_br2" 00:26:52.614 09:55:15 -- nvmf/common.sh@159 -- # true 00:26:52.614 09:55:15 -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:26:52.614 09:55:15 -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:26:52.614 09:55:15 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:26:52.614 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:26:52.614 09:55:15 -- nvmf/common.sh@162 -- # true 00:26:52.614 09:55:15 -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:26:52.614 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:26:52.614 09:55:15 -- nvmf/common.sh@163 -- # true 00:26:52.614 09:55:15 -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:26:52.614 09:55:15 -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:26:52.614 09:55:15 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:26:52.614 09:55:15 -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:26:52.872 09:55:16 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:26:52.872 09:55:16 -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:26:52.872 09:55:16 -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:26:52.872 09:55:16 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:26:52.872 09:55:16 -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:26:52.872 09:55:16 -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:26:52.872 09:55:16 -- 
nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:26:52.872 09:55:16 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:26:52.872 09:55:16 -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:26:52.872 09:55:16 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:26:52.872 09:55:16 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:26:52.872 09:55:16 -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:26:52.872 09:55:16 -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:26:52.872 09:55:16 -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:26:52.872 09:55:16 -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:26:52.872 09:55:16 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:26:52.872 09:55:16 -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:26:52.872 09:55:16 -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:26:52.872 09:55:16 -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:26:52.872 09:55:16 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:26:52.872 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:26:52.872 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.114 ms 00:26:52.872 00:26:52.872 --- 10.0.0.2 ping statistics --- 00:26:52.872 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:52.872 rtt min/avg/max/mdev = 0.114/0.114/0.114/0.000 ms 00:26:52.872 09:55:16 -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:26:52.872 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:26:52.872 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.078 ms 00:26:52.872 00:26:52.872 --- 10.0.0.3 ping statistics --- 00:26:52.872 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:52.872 rtt min/avg/max/mdev = 0.078/0.078/0.078/0.000 ms 00:26:52.872 09:55:16 -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:26:52.872 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:26:52.872 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.034 ms 00:26:52.872 00:26:52.872 --- 10.0.0.1 ping statistics --- 00:26:52.872 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:52.872 rtt min/avg/max/mdev = 0.034/0.034/0.034/0.000 ms 00:26:52.872 09:55:16 -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:26:52.872 09:55:16 -- nvmf/common.sh@422 -- # return 0 00:26:52.872 09:55:16 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:26:52.872 09:55:16 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:26:52.872 09:55:16 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:26:52.872 09:55:16 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:26:52.872 09:55:16 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:26:52.872 09:55:16 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:26:52.872 09:55:16 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:26:52.872 09:55:16 -- host/auth.sh@78 -- # nvmfappstart -L nvme_auth 00:26:52.872 09:55:16 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:26:52.872 09:55:16 -- common/autotest_common.sh@710 -- # xtrace_disable 00:26:52.872 09:55:16 -- common/autotest_common.sh@10 -- # set +x 00:26:52.872 09:55:16 -- nvmf/common.sh@470 -- # nvmfpid=77939 00:26:52.872 09:55:16 -- nvmf/common.sh@471 -- # waitforlisten 77939 00:26:52.872 09:55:16 -- nvmf/common.sh@469 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvme_auth 00:26:52.872 09:55:16 -- common/autotest_common.sh@817 -- # '[' -z 77939 ']' 00:26:52.872 09:55:16 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:52.872 09:55:16 -- common/autotest_common.sh@822 -- # local max_retries=100 00:26:52.872 09:55:16 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
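Condensed from the nvmf_veth_init xtrace above, the topology the auth test now runs on keeps nvmf_init_if (10.0.0.1) in the host namespace, moves nvmf_tgt_if (10.0.0.2) and nvmf_tgt_if2 (10.0.0.3) into nvmf_tgt_ns_spdk, and enslaves the peer ends of all three veth pairs to the nvmf_br bridge. Every command below appears in the log; only the "link set up" calls are folded into loops for brevity.

# condensed from the nvmf_veth_init commands above (sketch, run as root)
ip netns add nvmf_tgt_ns_spdk
ip link add nvmf_init_if type veth peer name nvmf_init_br   # host-side initiator interface
ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br    # target, first address
ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2   # target, second address
ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
ip addr add 10.0.0.1/24 dev nvmf_init_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2
ip link add nvmf_br type bridge
for dev in nvmf_init_if nvmf_init_br nvmf_tgt_br nvmf_tgt_br2 nvmf_br; do
    ip link set "$dev" up
done
for dev in nvmf_tgt_if nvmf_tgt_if2 lo; do
    ip netns exec nvmf_tgt_ns_spdk ip link set "$dev" up
done
ip link set nvmf_init_br master nvmf_br   # bridge the three peer ends together
ip link set nvmf_tgt_br  master nvmf_br
ip link set nvmf_tgt_br2 master nvmf_br
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT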
00:26:52.872 09:55:16 -- common/autotest_common.sh@826 -- # xtrace_disable 00:26:52.872 09:55:16 -- common/autotest_common.sh@10 -- # set +x 00:26:54.245 09:55:17 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:26:54.245 09:55:17 -- common/autotest_common.sh@850 -- # return 0 00:26:54.245 09:55:17 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:26:54.245 09:55:17 -- common/autotest_common.sh@716 -- # xtrace_disable 00:26:54.245 09:55:17 -- common/autotest_common.sh@10 -- # set +x 00:26:54.245 09:55:17 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:26:54.245 09:55:17 -- host/auth.sh@79 -- # trap 'cat /home/vagrant/spdk_repo/spdk/../output/nvme-auth.log; cleanup' SIGINT SIGTERM EXIT 00:26:54.245 09:55:17 -- host/auth.sh@81 -- # gen_key null 32 00:26:54.245 09:55:17 -- host/auth.sh@53 -- # local digest len file key 00:26:54.245 09:55:17 -- host/auth.sh@54 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:26:54.245 09:55:17 -- host/auth.sh@54 -- # local -A digests 00:26:54.245 09:55:17 -- host/auth.sh@56 -- # digest=null 00:26:54.245 09:55:17 -- host/auth.sh@56 -- # len=32 00:26:54.245 09:55:17 -- host/auth.sh@57 -- # xxd -p -c0 -l 16 /dev/urandom 00:26:54.245 09:55:17 -- host/auth.sh@57 -- # key=3c38eab1cb7c82e102d2a3b01df2b883 00:26:54.245 09:55:17 -- host/auth.sh@58 -- # mktemp -t spdk.key-null.XXX 00:26:54.245 09:55:17 -- host/auth.sh@58 -- # file=/tmp/spdk.key-null.Czh 00:26:54.245 09:55:17 -- host/auth.sh@59 -- # format_dhchap_key 3c38eab1cb7c82e102d2a3b01df2b883 0 00:26:54.245 09:55:17 -- nvmf/common.sh@708 -- # format_key DHHC-1 3c38eab1cb7c82e102d2a3b01df2b883 0 00:26:54.245 09:55:17 -- nvmf/common.sh@691 -- # local prefix key digest 00:26:54.245 09:55:17 -- nvmf/common.sh@693 -- # prefix=DHHC-1 00:26:54.245 09:55:17 -- nvmf/common.sh@693 -- # key=3c38eab1cb7c82e102d2a3b01df2b883 00:26:54.245 09:55:17 -- nvmf/common.sh@693 -- # digest=0 00:26:54.245 09:55:17 -- nvmf/common.sh@694 -- # python - 00:26:54.245 09:55:17 -- host/auth.sh@60 -- # chmod 0600 /tmp/spdk.key-null.Czh 00:26:54.245 09:55:17 -- host/auth.sh@62 -- # echo /tmp/spdk.key-null.Czh 00:26:54.245 09:55:17 -- host/auth.sh@81 -- # keys[0]=/tmp/spdk.key-null.Czh 00:26:54.245 09:55:17 -- host/auth.sh@82 -- # gen_key null 48 00:26:54.245 09:55:17 -- host/auth.sh@53 -- # local digest len file key 00:26:54.245 09:55:17 -- host/auth.sh@54 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:26:54.245 09:55:17 -- host/auth.sh@54 -- # local -A digests 00:26:54.245 09:55:17 -- host/auth.sh@56 -- # digest=null 00:26:54.245 09:55:17 -- host/auth.sh@56 -- # len=48 00:26:54.245 09:55:17 -- host/auth.sh@57 -- # xxd -p -c0 -l 24 /dev/urandom 00:26:54.245 09:55:17 -- host/auth.sh@57 -- # key=279036d5c7acc6187b6223fb707cf9e485d48504c0de3141 00:26:54.245 09:55:17 -- host/auth.sh@58 -- # mktemp -t spdk.key-null.XXX 00:26:54.245 09:55:17 -- host/auth.sh@58 -- # file=/tmp/spdk.key-null.SiP 00:26:54.245 09:55:17 -- host/auth.sh@59 -- # format_dhchap_key 279036d5c7acc6187b6223fb707cf9e485d48504c0de3141 0 00:26:54.245 09:55:17 -- nvmf/common.sh@708 -- # format_key DHHC-1 279036d5c7acc6187b6223fb707cf9e485d48504c0de3141 0 00:26:54.245 09:55:17 -- nvmf/common.sh@691 -- # local prefix key digest 00:26:54.245 09:55:17 -- nvmf/common.sh@693 -- # prefix=DHHC-1 00:26:54.245 09:55:17 -- nvmf/common.sh@693 -- # key=279036d5c7acc6187b6223fb707cf9e485d48504c0de3141 00:26:54.245 09:55:17 -- nvmf/common.sh@693 -- # digest=0 00:26:54.245 
09:55:17 -- nvmf/common.sh@694 -- # python - 00:26:54.245 09:55:17 -- host/auth.sh@60 -- # chmod 0600 /tmp/spdk.key-null.SiP 00:26:54.245 09:55:17 -- host/auth.sh@62 -- # echo /tmp/spdk.key-null.SiP 00:26:54.245 09:55:17 -- host/auth.sh@82 -- # keys[1]=/tmp/spdk.key-null.SiP 00:26:54.245 09:55:17 -- host/auth.sh@83 -- # gen_key sha256 32 00:26:54.245 09:55:17 -- host/auth.sh@53 -- # local digest len file key 00:26:54.245 09:55:17 -- host/auth.sh@54 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:26:54.245 09:55:17 -- host/auth.sh@54 -- # local -A digests 00:26:54.245 09:55:17 -- host/auth.sh@56 -- # digest=sha256 00:26:54.245 09:55:17 -- host/auth.sh@56 -- # len=32 00:26:54.245 09:55:17 -- host/auth.sh@57 -- # xxd -p -c0 -l 16 /dev/urandom 00:26:54.245 09:55:17 -- host/auth.sh@57 -- # key=8f86b4dffde7da9bd202ff66e60902b4 00:26:54.245 09:55:17 -- host/auth.sh@58 -- # mktemp -t spdk.key-sha256.XXX 00:26:54.245 09:55:17 -- host/auth.sh@58 -- # file=/tmp/spdk.key-sha256.0TA 00:26:54.245 09:55:17 -- host/auth.sh@59 -- # format_dhchap_key 8f86b4dffde7da9bd202ff66e60902b4 1 00:26:54.245 09:55:17 -- nvmf/common.sh@708 -- # format_key DHHC-1 8f86b4dffde7da9bd202ff66e60902b4 1 00:26:54.245 09:55:17 -- nvmf/common.sh@691 -- # local prefix key digest 00:26:54.245 09:55:17 -- nvmf/common.sh@693 -- # prefix=DHHC-1 00:26:54.245 09:55:17 -- nvmf/common.sh@693 -- # key=8f86b4dffde7da9bd202ff66e60902b4 00:26:54.245 09:55:17 -- nvmf/common.sh@693 -- # digest=1 00:26:54.245 09:55:17 -- nvmf/common.sh@694 -- # python - 00:26:54.245 09:55:17 -- host/auth.sh@60 -- # chmod 0600 /tmp/spdk.key-sha256.0TA 00:26:54.245 09:55:17 -- host/auth.sh@62 -- # echo /tmp/spdk.key-sha256.0TA 00:26:54.245 09:55:17 -- host/auth.sh@83 -- # keys[2]=/tmp/spdk.key-sha256.0TA 00:26:54.245 09:55:17 -- host/auth.sh@84 -- # gen_key sha384 48 00:26:54.245 09:55:17 -- host/auth.sh@53 -- # local digest len file key 00:26:54.245 09:55:17 -- host/auth.sh@54 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:26:54.245 09:55:17 -- host/auth.sh@54 -- # local -A digests 00:26:54.245 09:55:17 -- host/auth.sh@56 -- # digest=sha384 00:26:54.245 09:55:17 -- host/auth.sh@56 -- # len=48 00:26:54.245 09:55:17 -- host/auth.sh@57 -- # xxd -p -c0 -l 24 /dev/urandom 00:26:54.245 09:55:17 -- host/auth.sh@57 -- # key=8f10a4303b7ad9076cfd94ea5fc4ced08334c3664adbdec6 00:26:54.245 09:55:17 -- host/auth.sh@58 -- # mktemp -t spdk.key-sha384.XXX 00:26:54.245 09:55:17 -- host/auth.sh@58 -- # file=/tmp/spdk.key-sha384.qG8 00:26:54.245 09:55:17 -- host/auth.sh@59 -- # format_dhchap_key 8f10a4303b7ad9076cfd94ea5fc4ced08334c3664adbdec6 2 00:26:54.245 09:55:17 -- nvmf/common.sh@708 -- # format_key DHHC-1 8f10a4303b7ad9076cfd94ea5fc4ced08334c3664adbdec6 2 00:26:54.245 09:55:17 -- nvmf/common.sh@691 -- # local prefix key digest 00:26:54.245 09:55:17 -- nvmf/common.sh@693 -- # prefix=DHHC-1 00:26:54.245 09:55:17 -- nvmf/common.sh@693 -- # key=8f10a4303b7ad9076cfd94ea5fc4ced08334c3664adbdec6 00:26:54.245 09:55:17 -- nvmf/common.sh@693 -- # digest=2 00:26:54.245 09:55:17 -- nvmf/common.sh@694 -- # python - 00:26:54.245 09:55:17 -- host/auth.sh@60 -- # chmod 0600 /tmp/spdk.key-sha384.qG8 00:26:54.245 09:55:17 -- host/auth.sh@62 -- # echo /tmp/spdk.key-sha384.qG8 00:26:54.245 09:55:17 -- host/auth.sh@84 -- # keys[3]=/tmp/spdk.key-sha384.qG8 00:26:54.245 09:55:17 -- host/auth.sh@85 -- # gen_key sha512 64 00:26:54.245 09:55:17 -- host/auth.sh@53 -- # local digest len file key 00:26:54.245 09:55:17 -- host/auth.sh@54 -- # 
digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:26:54.245 09:55:17 -- host/auth.sh@54 -- # local -A digests 00:26:54.245 09:55:17 -- host/auth.sh@56 -- # digest=sha512 00:26:54.245 09:55:17 -- host/auth.sh@56 -- # len=64 00:26:54.245 09:55:17 -- host/auth.sh@57 -- # xxd -p -c0 -l 32 /dev/urandom 00:26:54.246 09:55:17 -- host/auth.sh@57 -- # key=f1fc0f09ffe0e69508840839049753d156a7dfd167ef210abd77fc448bfb771e 00:26:54.246 09:55:17 -- host/auth.sh@58 -- # mktemp -t spdk.key-sha512.XXX 00:26:54.246 09:55:17 -- host/auth.sh@58 -- # file=/tmp/spdk.key-sha512.fGL 00:26:54.246 09:55:17 -- host/auth.sh@59 -- # format_dhchap_key f1fc0f09ffe0e69508840839049753d156a7dfd167ef210abd77fc448bfb771e 3 00:26:54.246 09:55:17 -- nvmf/common.sh@708 -- # format_key DHHC-1 f1fc0f09ffe0e69508840839049753d156a7dfd167ef210abd77fc448bfb771e 3 00:26:54.246 09:55:17 -- nvmf/common.sh@691 -- # local prefix key digest 00:26:54.246 09:55:17 -- nvmf/common.sh@693 -- # prefix=DHHC-1 00:26:54.246 09:55:17 -- nvmf/common.sh@693 -- # key=f1fc0f09ffe0e69508840839049753d156a7dfd167ef210abd77fc448bfb771e 00:26:54.246 09:55:17 -- nvmf/common.sh@693 -- # digest=3 00:26:54.246 09:55:17 -- nvmf/common.sh@694 -- # python - 00:26:54.246 09:55:17 -- host/auth.sh@60 -- # chmod 0600 /tmp/spdk.key-sha512.fGL 00:26:54.246 09:55:17 -- host/auth.sh@62 -- # echo /tmp/spdk.key-sha512.fGL 00:26:54.246 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:54.246 09:55:17 -- host/auth.sh@85 -- # keys[4]=/tmp/spdk.key-sha512.fGL 00:26:54.246 09:55:17 -- host/auth.sh@87 -- # waitforlisten 77939 00:26:54.246 09:55:17 -- common/autotest_common.sh@817 -- # '[' -z 77939 ']' 00:26:54.246 09:55:17 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:54.246 09:55:17 -- common/autotest_common.sh@822 -- # local max_retries=100 00:26:54.246 09:55:17 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
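Each gen_key call above draws random hex from /dev/urandom with xxd and hands it to format_dhchap_key, which wraps it into a DHHC-1 secret via an inline Python snippet whose body is not shown in the xtrace. The sketch below reproduces that wrapping for the first key under stated assumptions: that the ASCII hex string itself is the secret, that its CRC-32 is appended little-endian before base64 encoding, and that digest id 0 ("00") means no HMAC transformation.

# sketch of gen_key null 32 / format_dhchap_key (assumptions flagged in comments)
key=$(xxd -p -c0 -l 16 /dev/urandom)     # 32 hex characters, as in the log above
file=$(mktemp -t spdk.key-null.XXX)
python3 - "$key" > "$file" <<'EOF'
import base64, sys, zlib
# Assumptions: the hex text is used verbatim as the secret bytes, the CRC-32 of
# the secret is appended little-endian, and "00" selects no HMAC.
secret = sys.argv[1].encode()
crc = zlib.crc32(secret).to_bytes(4, "little")
print("DHHC-1:00:" + base64.b64encode(secret + crc).decode() + ":")
EOF
chmod 0600 "$file"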
00:26:54.246 09:55:17 -- common/autotest_common.sh@826 -- # xtrace_disable 00:26:54.246 09:55:17 -- common/autotest_common.sh@10 -- # set +x 00:26:54.504 09:55:17 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:26:54.504 09:55:17 -- common/autotest_common.sh@850 -- # return 0 00:26:54.504 09:55:17 -- host/auth.sh@88 -- # for i in "${!keys[@]}" 00:26:54.504 09:55:17 -- host/auth.sh@89 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.Czh 00:26:54.504 09:55:17 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:54.504 09:55:17 -- common/autotest_common.sh@10 -- # set +x 00:26:54.504 09:55:17 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:54.504 09:55:17 -- host/auth.sh@88 -- # for i in "${!keys[@]}" 00:26:54.504 09:55:17 -- host/auth.sh@89 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-null.SiP 00:26:54.504 09:55:17 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:54.504 09:55:17 -- common/autotest_common.sh@10 -- # set +x 00:26:54.504 09:55:17 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:54.504 09:55:17 -- host/auth.sh@88 -- # for i in "${!keys[@]}" 00:26:54.504 09:55:17 -- host/auth.sh@89 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha256.0TA 00:26:54.504 09:55:17 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:54.504 09:55:17 -- common/autotest_common.sh@10 -- # set +x 00:26:54.504 09:55:17 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:54.504 09:55:17 -- host/auth.sh@88 -- # for i in "${!keys[@]}" 00:26:54.504 09:55:17 -- host/auth.sh@89 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha384.qG8 00:26:54.504 09:55:17 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:54.504 09:55:17 -- common/autotest_common.sh@10 -- # set +x 00:26:54.504 09:55:17 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:54.504 09:55:17 -- host/auth.sh@88 -- # for i in "${!keys[@]}" 00:26:54.504 09:55:17 -- host/auth.sh@89 -- # rpc_cmd keyring_file_add_key key4 /tmp/spdk.key-sha512.fGL 00:26:54.504 09:55:17 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:54.504 09:55:17 -- common/autotest_common.sh@10 -- # set +x 00:26:54.504 09:55:17 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:54.504 09:55:17 -- host/auth.sh@92 -- # nvmet_auth_init 00:26:54.504 09:55:17 -- host/auth.sh@35 -- # get_main_ns_ip 00:26:54.504 09:55:17 -- nvmf/common.sh@717 -- # local ip 00:26:54.504 09:55:17 -- nvmf/common.sh@718 -- # ip_candidates=() 00:26:54.504 09:55:17 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:26:54.504 09:55:17 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:54.504 09:55:17 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:54.504 09:55:17 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:26:54.504 09:55:17 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:54.504 09:55:17 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:26:54.773 09:55:17 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:26:54.773 09:55:17 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:26:54.773 09:55:17 -- host/auth.sh@35 -- # configure_kernel_target nqn.2024-02.io.spdk:cnode0 10.0.0.1 00:26:54.773 09:55:17 -- nvmf/common.sh@621 -- # local kernel_name=nqn.2024-02.io.spdk:cnode0 kernel_target_ip=10.0.0.1 00:26:54.773 09:55:17 -- nvmf/common.sh@623 -- # nvmet=/sys/kernel/config/nvmet 00:26:54.773 09:55:17 -- nvmf/common.sh@624 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:26:54.773 09:55:17 -- nvmf/common.sh@625 -- # 
kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:26:54.773 09:55:17 -- nvmf/common.sh@626 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:26:54.773 09:55:17 -- nvmf/common.sh@628 -- # local block nvme 00:26:54.773 09:55:17 -- nvmf/common.sh@630 -- # [[ ! -e /sys/module/nvmet ]] 00:26:54.773 09:55:17 -- nvmf/common.sh@631 -- # modprobe nvmet 00:26:54.773 09:55:17 -- nvmf/common.sh@634 -- # [[ -e /sys/kernel/config/nvmet ]] 00:26:54.773 09:55:17 -- nvmf/common.sh@636 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:26:55.030 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:26:55.030 Waiting for block devices as requested 00:26:55.030 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:26:55.030 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:26:55.597 09:55:18 -- nvmf/common.sh@639 -- # for block in /sys/block/nvme* 00:26:55.597 09:55:18 -- nvmf/common.sh@640 -- # [[ -e /sys/block/nvme0n1 ]] 00:26:55.597 09:55:18 -- nvmf/common.sh@641 -- # is_block_zoned nvme0n1 00:26:55.597 09:55:18 -- common/autotest_common.sh@1648 -- # local device=nvme0n1 00:26:55.597 09:55:18 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:26:55.597 09:55:18 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:26:55.597 09:55:18 -- nvmf/common.sh@642 -- # block_in_use nvme0n1 00:26:55.597 09:55:18 -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:26:55.597 09:55:18 -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n1 00:26:55.597 No valid GPT data, bailing 00:26:55.597 09:55:18 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:26:55.597 09:55:18 -- scripts/common.sh@391 -- # pt= 00:26:55.597 09:55:18 -- scripts/common.sh@392 -- # return 1 00:26:55.597 09:55:18 -- nvmf/common.sh@642 -- # nvme=/dev/nvme0n1 00:26:55.597 09:55:18 -- nvmf/common.sh@639 -- # for block in /sys/block/nvme* 00:26:55.597 09:55:18 -- nvmf/common.sh@640 -- # [[ -e /sys/block/nvme0n2 ]] 00:26:55.597 09:55:18 -- nvmf/common.sh@641 -- # is_block_zoned nvme0n2 00:26:55.597 09:55:18 -- common/autotest_common.sh@1648 -- # local device=nvme0n2 00:26:55.597 09:55:18 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n2/queue/zoned ]] 00:26:55.597 09:55:18 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:26:55.597 09:55:18 -- nvmf/common.sh@642 -- # block_in_use nvme0n2 00:26:55.597 09:55:18 -- scripts/common.sh@378 -- # local block=nvme0n2 pt 00:26:55.597 09:55:18 -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n2 00:26:55.856 No valid GPT data, bailing 00:26:55.856 09:55:19 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n2 00:26:55.856 09:55:19 -- scripts/common.sh@391 -- # pt= 00:26:55.856 09:55:19 -- scripts/common.sh@392 -- # return 1 00:26:55.856 09:55:19 -- nvmf/common.sh@642 -- # nvme=/dev/nvme0n2 00:26:55.856 09:55:19 -- nvmf/common.sh@639 -- # for block in /sys/block/nvme* 00:26:55.856 09:55:19 -- nvmf/common.sh@640 -- # [[ -e /sys/block/nvme0n3 ]] 00:26:55.856 09:55:19 -- nvmf/common.sh@641 -- # is_block_zoned nvme0n3 00:26:55.856 09:55:19 -- common/autotest_common.sh@1648 -- # local device=nvme0n3 00:26:55.856 09:55:19 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n3/queue/zoned ]] 00:26:55.856 09:55:19 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:26:55.856 09:55:19 -- nvmf/common.sh@642 -- # block_in_use 
nvme0n3 00:26:55.856 09:55:19 -- scripts/common.sh@378 -- # local block=nvme0n3 pt 00:26:55.856 09:55:19 -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n3 00:26:55.856 No valid GPT data, bailing 00:26:55.856 09:55:19 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n3 00:26:55.856 09:55:19 -- scripts/common.sh@391 -- # pt= 00:26:55.856 09:55:19 -- scripts/common.sh@392 -- # return 1 00:26:55.856 09:55:19 -- nvmf/common.sh@642 -- # nvme=/dev/nvme0n3 00:26:55.856 09:55:19 -- nvmf/common.sh@639 -- # for block in /sys/block/nvme* 00:26:55.856 09:55:19 -- nvmf/common.sh@640 -- # [[ -e /sys/block/nvme1n1 ]] 00:26:55.856 09:55:19 -- nvmf/common.sh@641 -- # is_block_zoned nvme1n1 00:26:55.856 09:55:19 -- common/autotest_common.sh@1648 -- # local device=nvme1n1 00:26:55.856 09:55:19 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:26:55.856 09:55:19 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:26:55.856 09:55:19 -- nvmf/common.sh@642 -- # block_in_use nvme1n1 00:26:55.856 09:55:19 -- scripts/common.sh@378 -- # local block=nvme1n1 pt 00:26:55.856 09:55:19 -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme1n1 00:26:55.856 No valid GPT data, bailing 00:26:55.856 09:55:19 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:26:55.856 09:55:19 -- scripts/common.sh@391 -- # pt= 00:26:55.856 09:55:19 -- scripts/common.sh@392 -- # return 1 00:26:55.856 09:55:19 -- nvmf/common.sh@642 -- # nvme=/dev/nvme1n1 00:26:55.856 09:55:19 -- nvmf/common.sh@645 -- # [[ -b /dev/nvme1n1 ]] 00:26:55.856 09:55:19 -- nvmf/common.sh@647 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:26:55.856 09:55:19 -- nvmf/common.sh@648 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:26:55.856 09:55:19 -- nvmf/common.sh@649 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:26:55.856 09:55:19 -- nvmf/common.sh@654 -- # echo SPDK-nqn.2024-02.io.spdk:cnode0 00:26:55.856 09:55:19 -- nvmf/common.sh@656 -- # echo 1 00:26:55.856 09:55:19 -- nvmf/common.sh@657 -- # echo /dev/nvme1n1 00:26:55.856 09:55:19 -- nvmf/common.sh@658 -- # echo 1 00:26:55.856 09:55:19 -- nvmf/common.sh@660 -- # echo 10.0.0.1 00:26:55.856 09:55:19 -- nvmf/common.sh@661 -- # echo tcp 00:26:55.856 09:55:19 -- nvmf/common.sh@662 -- # echo 4420 00:26:55.856 09:55:19 -- nvmf/common.sh@663 -- # echo ipv4 00:26:55.856 09:55:19 -- nvmf/common.sh@666 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 /sys/kernel/config/nvmet/ports/1/subsystems/ 00:26:55.856 09:55:19 -- nvmf/common.sh@669 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:e0c1fd61-2902-4c83-8893-eeb512aeb67d --hostid=e0c1fd61-2902-4c83-8893-eeb512aeb67d -a 10.0.0.1 -t tcp -s 4420 00:26:55.856 00:26:55.856 Discovery Log Number of Records 2, Generation counter 2 00:26:55.856 =====Discovery Log Entry 0====== 00:26:55.856 trtype: tcp 00:26:55.856 adrfam: ipv4 00:26:55.856 subtype: current discovery subsystem 00:26:55.856 treq: not specified, sq flow control disable supported 00:26:55.856 portid: 1 00:26:55.856 trsvcid: 4420 00:26:55.856 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:26:55.856 traddr: 10.0.0.1 00:26:55.856 eflags: none 00:26:55.856 sectype: none 00:26:55.856 =====Discovery Log Entry 1====== 00:26:55.856 trtype: tcp 00:26:55.856 adrfam: ipv4 00:26:55.856 subtype: nvme subsystem 00:26:55.856 treq: not specified, sq flow control disable supported 
00:26:55.856 portid: 1 00:26:55.856 trsvcid: 4420 00:26:55.856 subnqn: nqn.2024-02.io.spdk:cnode0 00:26:55.856 traddr: 10.0.0.1 00:26:55.856 eflags: none 00:26:55.856 sectype: none 00:26:55.856 09:55:19 -- host/auth.sh@36 -- # mkdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:26:55.856 09:55:19 -- host/auth.sh@37 -- # echo 0 00:26:55.856 09:55:19 -- host/auth.sh@38 -- # ln -s /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:26:55.856 09:55:19 -- host/auth.sh@95 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:26:55.856 09:55:19 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:26:55.856 09:55:19 -- host/auth.sh@44 -- # digest=sha256 00:26:55.856 09:55:19 -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:26:55.856 09:55:19 -- host/auth.sh@44 -- # keyid=1 00:26:55.856 09:55:19 -- host/auth.sh@45 -- # key=DHHC-1:00:Mjc5MDM2ZDVjN2FjYzYxODdiNjIyM2ZiNzA3Y2Y5ZTQ4NWQ0ODUwNGMwZGUzMTQxPQDxBw==: 00:26:55.856 09:55:19 -- host/auth.sh@47 -- # echo 'hmac(sha256)' 00:26:55.856 09:55:19 -- host/auth.sh@48 -- # echo ffdhe2048 00:26:56.114 09:55:19 -- host/auth.sh@49 -- # echo DHHC-1:00:Mjc5MDM2ZDVjN2FjYzYxODdiNjIyM2ZiNzA3Y2Y5ZTQ4NWQ0ODUwNGMwZGUzMTQxPQDxBw==: 00:26:56.114 09:55:19 -- host/auth.sh@100 -- # IFS=, 00:26:56.114 09:55:19 -- host/auth.sh@101 -- # printf %s sha256,sha384,sha512 00:26:56.114 09:55:19 -- host/auth.sh@100 -- # IFS=, 00:26:56.114 09:55:19 -- host/auth.sh@101 -- # printf %s ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:26:56.114 09:55:19 -- host/auth.sh@100 -- # connect_authenticate sha256,sha384,sha512 ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 1 00:26:56.114 09:55:19 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:26:56.114 09:55:19 -- host/auth.sh@68 -- # digest=sha256,sha384,sha512 00:26:56.114 09:55:19 -- host/auth.sh@68 -- # dhgroup=ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:26:56.114 09:55:19 -- host/auth.sh@68 -- # keyid=1 00:26:56.114 09:55:19 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:26:56.114 09:55:19 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:56.114 09:55:19 -- common/autotest_common.sh@10 -- # set +x 00:26:56.114 09:55:19 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:56.114 09:55:19 -- host/auth.sh@70 -- # get_main_ns_ip 00:26:56.114 09:55:19 -- nvmf/common.sh@717 -- # local ip 00:26:56.114 09:55:19 -- nvmf/common.sh@718 -- # ip_candidates=() 00:26:56.114 09:55:19 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:26:56.114 09:55:19 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:56.114 09:55:19 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:56.114 09:55:19 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:26:56.114 09:55:19 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:56.114 09:55:19 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:26:56.114 09:55:19 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:26:56.114 09:55:19 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:26:56.115 09:55:19 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 00:26:56.115 09:55:19 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:56.115 09:55:19 -- common/autotest_common.sh@10 -- # set +x 00:26:56.115 
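With the allowed-hosts link in place, the host side drives everything through rpc_cmd, the harness's wrapper around scripts/rpc.py. The sequence below restates the attach shown in the trace as direct rpc.py calls; the 10.0.0.1:4420 listener, the NQNs, and the key name key1 (registered earlier with keyring_file_add_key) are the values from this particular run.

# Host-side sequence from the trace, issued via scripts/rpc.py directly.
rpc=./scripts/rpc.py
hostnqn=nqn.2024-02.io.spdk:host0
subnqn=nqn.2024-02.io.spdk:cnode0

# Restrict the initiator to one digest/DH-group combination for this pass.
$rpc bdev_nvme_set_options \
    --dhchap-digests sha256 \
    --dhchap-dhgroups ffdhe2048

# Connect to the kernel target, authenticating with a key registered
# earlier (keyring_file_add_key key1 /tmp/spdk.key-null.SiP).
$rpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 \
    -a 10.0.0.1 -s 4420 -q "$hostnqn" -n "$subnqn" --dhchap-key key1

# Verify the controller came up, then tear it down before the next case.
$rpc bdev_nvme_get_controllers | jq -r '.[].name'   # expect: nvme0
$rpc bdev_nvme_detach_controller nvme0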
nvme0n1 00:26:56.115 09:55:19 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:56.115 09:55:19 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:26:56.115 09:55:19 -- host/auth.sh@73 -- # jq -r '.[].name' 00:26:56.115 09:55:19 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:56.115 09:55:19 -- common/autotest_common.sh@10 -- # set +x 00:26:56.115 09:55:19 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:56.115 09:55:19 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:56.115 09:55:19 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:56.115 09:55:19 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:56.115 09:55:19 -- common/autotest_common.sh@10 -- # set +x 00:26:56.373 09:55:19 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:56.373 09:55:19 -- host/auth.sh@107 -- # for digest in "${digests[@]}" 00:26:56.373 09:55:19 -- host/auth.sh@108 -- # for dhgroup in "${dhgroups[@]}" 00:26:56.373 09:55:19 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:26:56.373 09:55:19 -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe2048 0 00:26:56.373 09:55:19 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:26:56.373 09:55:19 -- host/auth.sh@44 -- # digest=sha256 00:26:56.373 09:55:19 -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:26:56.373 09:55:19 -- host/auth.sh@44 -- # keyid=0 00:26:56.373 09:55:19 -- host/auth.sh@45 -- # key=DHHC-1:00:M2MzOGVhYjFjYjdjODJlMTAyZDJhM2IwMWRmMmI4ODN6Hpbr: 00:26:56.373 09:55:19 -- host/auth.sh@47 -- # echo 'hmac(sha256)' 00:26:56.373 09:55:19 -- host/auth.sh@48 -- # echo ffdhe2048 00:26:56.373 09:55:19 -- host/auth.sh@49 -- # echo DHHC-1:00:M2MzOGVhYjFjYjdjODJlMTAyZDJhM2IwMWRmMmI4ODN6Hpbr: 00:26:56.373 09:55:19 -- host/auth.sh@111 -- # connect_authenticate sha256 ffdhe2048 0 00:26:56.373 09:55:19 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:26:56.373 09:55:19 -- host/auth.sh@68 -- # digest=sha256 00:26:56.373 09:55:19 -- host/auth.sh@68 -- # dhgroup=ffdhe2048 00:26:56.373 09:55:19 -- host/auth.sh@68 -- # keyid=0 00:26:56.373 09:55:19 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:26:56.373 09:55:19 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:56.373 09:55:19 -- common/autotest_common.sh@10 -- # set +x 00:26:56.373 09:55:19 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:56.373 09:55:19 -- host/auth.sh@70 -- # get_main_ns_ip 00:26:56.373 09:55:19 -- nvmf/common.sh@717 -- # local ip 00:26:56.373 09:55:19 -- nvmf/common.sh@718 -- # ip_candidates=() 00:26:56.373 09:55:19 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:26:56.373 09:55:19 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:56.373 09:55:19 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:56.373 09:55:19 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:26:56.373 09:55:19 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:56.373 09:55:19 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:26:56.373 09:55:19 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:26:56.373 09:55:19 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:26:56.373 09:55:19 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 00:26:56.373 09:55:19 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:56.373 09:55:19 -- common/autotest_common.sh@10 -- # set +x 00:26:56.373 nvme0n1 
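Each nvmet_auth_set_key call pushes a hash, a DH group, and a DHHC-1 secret to the kernel target for host0. The xtrace only records the echo payloads, not their redirection targets, so the configfs attribute names below (dhchap_hash, dhchap_dhgroup, dhchap_key under the allowed host entry) are an assumption about the kernel nvmet auth layout rather than something read from the log; the key file paths are the ones registered in this run.

# Target-side sketch of nvmet_auth_set_key; configfs paths are assumed.
host_dir=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0
keys=(/tmp/spdk.key-null.Czh /tmp/spdk.key-null.SiP /tmp/spdk.key-sha256.0TA \
      /tmp/spdk.key-sha384.qG8 /tmp/spdk.key-sha512.fGL)   # ids 0-4 from this run

nvmet_auth_set_key() {
    local digest=$1 dhgroup=$2 keyid=$3
    echo "hmac($digest)"  > "$host_dir/dhchap_hash"      # e.g. hmac(sha256)
    echo "$dhgroup"       > "$host_dir/dhchap_dhgroup"   # e.g. ffdhe2048
    cat "${keys[$keyid]}" > "$host_dir/dhchap_key"       # DHHC-1:... secret
}

nvmet_auth_set_key sha256 ffdhe2048 0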
00:26:56.373 09:55:19 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:56.373 09:55:19 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:26:56.373 09:55:19 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:56.373 09:55:19 -- common/autotest_common.sh@10 -- # set +x 00:26:56.373 09:55:19 -- host/auth.sh@73 -- # jq -r '.[].name' 00:26:56.373 09:55:19 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:56.373 09:55:19 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:56.373 09:55:19 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:56.373 09:55:19 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:56.373 09:55:19 -- common/autotest_common.sh@10 -- # set +x 00:26:56.373 09:55:19 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:56.373 09:55:19 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:26:56.373 09:55:19 -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:26:56.373 09:55:19 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:26:56.373 09:55:19 -- host/auth.sh@44 -- # digest=sha256 00:26:56.373 09:55:19 -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:26:56.373 09:55:19 -- host/auth.sh@44 -- # keyid=1 00:26:56.373 09:55:19 -- host/auth.sh@45 -- # key=DHHC-1:00:Mjc5MDM2ZDVjN2FjYzYxODdiNjIyM2ZiNzA3Y2Y5ZTQ4NWQ0ODUwNGMwZGUzMTQxPQDxBw==: 00:26:56.373 09:55:19 -- host/auth.sh@47 -- # echo 'hmac(sha256)' 00:26:56.373 09:55:19 -- host/auth.sh@48 -- # echo ffdhe2048 00:26:56.373 09:55:19 -- host/auth.sh@49 -- # echo DHHC-1:00:Mjc5MDM2ZDVjN2FjYzYxODdiNjIyM2ZiNzA3Y2Y5ZTQ4NWQ0ODUwNGMwZGUzMTQxPQDxBw==: 00:26:56.373 09:55:19 -- host/auth.sh@111 -- # connect_authenticate sha256 ffdhe2048 1 00:26:56.373 09:55:19 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:26:56.373 09:55:19 -- host/auth.sh@68 -- # digest=sha256 00:26:56.373 09:55:19 -- host/auth.sh@68 -- # dhgroup=ffdhe2048 00:26:56.373 09:55:19 -- host/auth.sh@68 -- # keyid=1 00:26:56.373 09:55:19 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:26:56.373 09:55:19 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:56.373 09:55:19 -- common/autotest_common.sh@10 -- # set +x 00:26:56.373 09:55:19 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:56.373 09:55:19 -- host/auth.sh@70 -- # get_main_ns_ip 00:26:56.373 09:55:19 -- nvmf/common.sh@717 -- # local ip 00:26:56.373 09:55:19 -- nvmf/common.sh@718 -- # ip_candidates=() 00:26:56.373 09:55:19 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:26:56.373 09:55:19 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:56.373 09:55:19 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:56.373 09:55:19 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:26:56.373 09:55:19 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:56.373 09:55:19 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:26:56.373 09:55:19 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:26:56.373 09:55:19 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:26:56.373 09:55:19 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 00:26:56.373 09:55:19 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:56.373 09:55:19 -- common/autotest_common.sh@10 -- # set +x 00:26:56.631 nvme0n1 00:26:56.631 09:55:19 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:56.631 09:55:19 -- host/auth.sh@73 -- # 
rpc_cmd bdev_nvme_get_controllers 00:26:56.631 09:55:19 -- host/auth.sh@73 -- # jq -r '.[].name' 00:26:56.631 09:55:19 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:56.631 09:55:19 -- common/autotest_common.sh@10 -- # set +x 00:26:56.631 09:55:19 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:56.631 09:55:19 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:56.631 09:55:19 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:56.631 09:55:19 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:56.631 09:55:19 -- common/autotest_common.sh@10 -- # set +x 00:26:56.631 09:55:19 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:56.631 09:55:19 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:26:56.631 09:55:19 -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:26:56.631 09:55:19 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:26:56.631 09:55:19 -- host/auth.sh@44 -- # digest=sha256 00:26:56.631 09:55:19 -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:26:56.631 09:55:19 -- host/auth.sh@44 -- # keyid=2 00:26:56.631 09:55:19 -- host/auth.sh@45 -- # key=DHHC-1:01:OGY4NmI0ZGZmZGU3ZGE5YmQyMDJmZjY2ZTYwOTAyYjTeBEM7: 00:26:56.631 09:55:19 -- host/auth.sh@47 -- # echo 'hmac(sha256)' 00:26:56.631 09:55:19 -- host/auth.sh@48 -- # echo ffdhe2048 00:26:56.631 09:55:19 -- host/auth.sh@49 -- # echo DHHC-1:01:OGY4NmI0ZGZmZGU3ZGE5YmQyMDJmZjY2ZTYwOTAyYjTeBEM7: 00:26:56.631 09:55:19 -- host/auth.sh@111 -- # connect_authenticate sha256 ffdhe2048 2 00:26:56.631 09:55:19 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:26:56.631 09:55:19 -- host/auth.sh@68 -- # digest=sha256 00:26:56.631 09:55:19 -- host/auth.sh@68 -- # dhgroup=ffdhe2048 00:26:56.631 09:55:19 -- host/auth.sh@68 -- # keyid=2 00:26:56.631 09:55:19 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:26:56.631 09:55:19 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:56.631 09:55:19 -- common/autotest_common.sh@10 -- # set +x 00:26:56.631 09:55:19 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:56.631 09:55:19 -- host/auth.sh@70 -- # get_main_ns_ip 00:26:56.631 09:55:19 -- nvmf/common.sh@717 -- # local ip 00:26:56.631 09:55:19 -- nvmf/common.sh@718 -- # ip_candidates=() 00:26:56.631 09:55:19 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:26:56.631 09:55:19 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:56.631 09:55:19 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:56.631 09:55:19 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:26:56.631 09:55:19 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:56.631 09:55:19 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:26:56.631 09:55:19 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:26:56.631 09:55:19 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:26:56.631 09:55:19 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:26:56.631 09:55:19 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:56.631 09:55:19 -- common/autotest_common.sh@10 -- # set +x 00:26:56.631 nvme0n1 00:26:56.631 09:55:19 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:56.631 09:55:19 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:26:56.631 09:55:19 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:56.631 09:55:19 -- host/auth.sh@73 -- # jq -r 
'.[].name' 00:26:56.631 09:55:19 -- common/autotest_common.sh@10 -- # set +x 00:26:56.631 09:55:20 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:56.889 09:55:20 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:56.889 09:55:20 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:56.889 09:55:20 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:56.889 09:55:20 -- common/autotest_common.sh@10 -- # set +x 00:26:56.889 09:55:20 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:56.889 09:55:20 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:26:56.889 09:55:20 -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe2048 3 00:26:56.890 09:55:20 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:26:56.890 09:55:20 -- host/auth.sh@44 -- # digest=sha256 00:26:56.890 09:55:20 -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:26:56.890 09:55:20 -- host/auth.sh@44 -- # keyid=3 00:26:56.890 09:55:20 -- host/auth.sh@45 -- # key=DHHC-1:02:OGYxMGE0MzAzYjdhZDkwNzZjZmQ5NGVhNWZjNGNlZDA4MzM0YzM2NjRhZGJkZWM28+CvFA==: 00:26:56.890 09:55:20 -- host/auth.sh@47 -- # echo 'hmac(sha256)' 00:26:56.890 09:55:20 -- host/auth.sh@48 -- # echo ffdhe2048 00:26:56.890 09:55:20 -- host/auth.sh@49 -- # echo DHHC-1:02:OGYxMGE0MzAzYjdhZDkwNzZjZmQ5NGVhNWZjNGNlZDA4MzM0YzM2NjRhZGJkZWM28+CvFA==: 00:26:56.890 09:55:20 -- host/auth.sh@111 -- # connect_authenticate sha256 ffdhe2048 3 00:26:56.890 09:55:20 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:26:56.890 09:55:20 -- host/auth.sh@68 -- # digest=sha256 00:26:56.890 09:55:20 -- host/auth.sh@68 -- # dhgroup=ffdhe2048 00:26:56.890 09:55:20 -- host/auth.sh@68 -- # keyid=3 00:26:56.890 09:55:20 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:26:56.890 09:55:20 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:56.890 09:55:20 -- common/autotest_common.sh@10 -- # set +x 00:26:56.890 09:55:20 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:56.890 09:55:20 -- host/auth.sh@70 -- # get_main_ns_ip 00:26:56.890 09:55:20 -- nvmf/common.sh@717 -- # local ip 00:26:56.890 09:55:20 -- nvmf/common.sh@718 -- # ip_candidates=() 00:26:56.890 09:55:20 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:26:56.890 09:55:20 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:56.890 09:55:20 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:56.890 09:55:20 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:26:56.890 09:55:20 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:56.890 09:55:20 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:26:56.890 09:55:20 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:26:56.890 09:55:20 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:26:56.890 09:55:20 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 00:26:56.890 09:55:20 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:56.890 09:55:20 -- common/autotest_common.sh@10 -- # set +x 00:26:56.890 nvme0n1 00:26:56.890 09:55:20 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:56.890 09:55:20 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:26:56.890 09:55:20 -- host/auth.sh@73 -- # jq -r '.[].name' 00:26:56.890 09:55:20 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:56.890 09:55:20 -- common/autotest_common.sh@10 -- # set +x 00:26:56.890 09:55:20 -- 
common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:56.890 09:55:20 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:56.890 09:55:20 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:56.890 09:55:20 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:56.890 09:55:20 -- common/autotest_common.sh@10 -- # set +x 00:26:56.890 09:55:20 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:56.890 09:55:20 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:26:56.890 09:55:20 -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe2048 4 00:26:56.890 09:55:20 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:26:56.890 09:55:20 -- host/auth.sh@44 -- # digest=sha256 00:26:56.890 09:55:20 -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:26:56.890 09:55:20 -- host/auth.sh@44 -- # keyid=4 00:26:56.890 09:55:20 -- host/auth.sh@45 -- # key=DHHC-1:03:ZjFmYzBmMDlmZmUwZTY5NTA4ODQwODM5MDQ5NzUzZDE1NmE3ZGZkMTY3ZWYyMTBhYmQ3N2ZjNDQ4YmZiNzcxZYTk7W0=: 00:26:56.890 09:55:20 -- host/auth.sh@47 -- # echo 'hmac(sha256)' 00:26:56.890 09:55:20 -- host/auth.sh@48 -- # echo ffdhe2048 00:26:56.890 09:55:20 -- host/auth.sh@49 -- # echo DHHC-1:03:ZjFmYzBmMDlmZmUwZTY5NTA4ODQwODM5MDQ5NzUzZDE1NmE3ZGZkMTY3ZWYyMTBhYmQ3N2ZjNDQ4YmZiNzcxZYTk7W0=: 00:26:56.890 09:55:20 -- host/auth.sh@111 -- # connect_authenticate sha256 ffdhe2048 4 00:26:56.890 09:55:20 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:26:56.890 09:55:20 -- host/auth.sh@68 -- # digest=sha256 00:26:56.890 09:55:20 -- host/auth.sh@68 -- # dhgroup=ffdhe2048 00:26:56.890 09:55:20 -- host/auth.sh@68 -- # keyid=4 00:26:56.890 09:55:20 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:26:56.890 09:55:20 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:56.890 09:55:20 -- common/autotest_common.sh@10 -- # set +x 00:26:56.890 09:55:20 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:56.890 09:55:20 -- host/auth.sh@70 -- # get_main_ns_ip 00:26:56.890 09:55:20 -- nvmf/common.sh@717 -- # local ip 00:26:56.890 09:55:20 -- nvmf/common.sh@718 -- # ip_candidates=() 00:26:56.890 09:55:20 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:26:56.890 09:55:20 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:56.890 09:55:20 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:56.890 09:55:20 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:26:56.890 09:55:20 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:56.890 09:55:20 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:26:56.890 09:55:20 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:26:56.890 09:55:20 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:26:56.890 09:55:20 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:26:56.890 09:55:20 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:56.890 09:55:20 -- common/autotest_common.sh@10 -- # set +x 00:26:57.148 nvme0n1 00:26:57.148 09:55:20 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:57.148 09:55:20 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:26:57.148 09:55:20 -- host/auth.sh@73 -- # jq -r '.[].name' 00:26:57.148 09:55:20 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:57.148 09:55:20 -- common/autotest_common.sh@10 -- # set +x 00:26:57.148 09:55:20 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:57.148 09:55:20 -- 
host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:57.148 09:55:20 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:57.148 09:55:20 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:57.148 09:55:20 -- common/autotest_common.sh@10 -- # set +x 00:26:57.148 09:55:20 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:57.148 09:55:20 -- host/auth.sh@108 -- # for dhgroup in "${dhgroups[@]}" 00:26:57.148 09:55:20 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:26:57.148 09:55:20 -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe3072 0 00:26:57.148 09:55:20 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:26:57.148 09:55:20 -- host/auth.sh@44 -- # digest=sha256 00:26:57.148 09:55:20 -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:26:57.148 09:55:20 -- host/auth.sh@44 -- # keyid=0 00:26:57.148 09:55:20 -- host/auth.sh@45 -- # key=DHHC-1:00:M2MzOGVhYjFjYjdjODJlMTAyZDJhM2IwMWRmMmI4ODN6Hpbr: 00:26:57.148 09:55:20 -- host/auth.sh@47 -- # echo 'hmac(sha256)' 00:26:57.148 09:55:20 -- host/auth.sh@48 -- # echo ffdhe3072 00:26:57.406 09:55:20 -- host/auth.sh@49 -- # echo DHHC-1:00:M2MzOGVhYjFjYjdjODJlMTAyZDJhM2IwMWRmMmI4ODN6Hpbr: 00:26:57.406 09:55:20 -- host/auth.sh@111 -- # connect_authenticate sha256 ffdhe3072 0 00:26:57.406 09:55:20 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:26:57.406 09:55:20 -- host/auth.sh@68 -- # digest=sha256 00:26:57.406 09:55:20 -- host/auth.sh@68 -- # dhgroup=ffdhe3072 00:26:57.406 09:55:20 -- host/auth.sh@68 -- # keyid=0 00:26:57.406 09:55:20 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:26:57.406 09:55:20 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:57.406 09:55:20 -- common/autotest_common.sh@10 -- # set +x 00:26:57.406 09:55:20 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:57.406 09:55:20 -- host/auth.sh@70 -- # get_main_ns_ip 00:26:57.406 09:55:20 -- nvmf/common.sh@717 -- # local ip 00:26:57.406 09:55:20 -- nvmf/common.sh@718 -- # ip_candidates=() 00:26:57.406 09:55:20 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:26:57.406 09:55:20 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:57.406 09:55:20 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:57.406 09:55:20 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:26:57.406 09:55:20 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:57.406 09:55:20 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:26:57.406 09:55:20 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:26:57.406 09:55:20 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:26:57.406 09:55:20 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 00:26:57.406 09:55:20 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:57.406 09:55:20 -- common/autotest_common.sh@10 -- # set +x 00:26:57.724 nvme0n1 00:26:57.724 09:55:20 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:57.724 09:55:20 -- host/auth.sh@73 -- # jq -r '.[].name' 00:26:57.724 09:55:20 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:26:57.724 09:55:20 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:57.724 09:55:20 -- common/autotest_common.sh@10 -- # set +x 00:26:57.724 09:55:20 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:57.724 09:55:20 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:57.724 09:55:20 -- host/auth.sh@74 
-- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:57.724 09:55:20 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:57.724 09:55:20 -- common/autotest_common.sh@10 -- # set +x 00:26:57.724 09:55:20 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:57.724 09:55:20 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:26:57.724 09:55:20 -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe3072 1 00:26:57.724 09:55:20 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:26:57.724 09:55:20 -- host/auth.sh@44 -- # digest=sha256 00:26:57.724 09:55:20 -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:26:57.724 09:55:20 -- host/auth.sh@44 -- # keyid=1 00:26:57.724 09:55:20 -- host/auth.sh@45 -- # key=DHHC-1:00:Mjc5MDM2ZDVjN2FjYzYxODdiNjIyM2ZiNzA3Y2Y5ZTQ4NWQ0ODUwNGMwZGUzMTQxPQDxBw==: 00:26:57.724 09:55:20 -- host/auth.sh@47 -- # echo 'hmac(sha256)' 00:26:57.724 09:55:20 -- host/auth.sh@48 -- # echo ffdhe3072 00:26:57.724 09:55:20 -- host/auth.sh@49 -- # echo DHHC-1:00:Mjc5MDM2ZDVjN2FjYzYxODdiNjIyM2ZiNzA3Y2Y5ZTQ4NWQ0ODUwNGMwZGUzMTQxPQDxBw==: 00:26:57.724 09:55:20 -- host/auth.sh@111 -- # connect_authenticate sha256 ffdhe3072 1 00:26:57.724 09:55:20 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:26:57.724 09:55:20 -- host/auth.sh@68 -- # digest=sha256 00:26:57.724 09:55:20 -- host/auth.sh@68 -- # dhgroup=ffdhe3072 00:26:57.724 09:55:20 -- host/auth.sh@68 -- # keyid=1 00:26:57.724 09:55:20 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:26:57.724 09:55:20 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:57.724 09:55:20 -- common/autotest_common.sh@10 -- # set +x 00:26:57.724 09:55:20 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:57.724 09:55:20 -- host/auth.sh@70 -- # get_main_ns_ip 00:26:57.724 09:55:20 -- nvmf/common.sh@717 -- # local ip 00:26:57.724 09:55:20 -- nvmf/common.sh@718 -- # ip_candidates=() 00:26:57.724 09:55:20 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:26:57.724 09:55:20 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:57.724 09:55:20 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:57.724 09:55:20 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:26:57.724 09:55:20 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:57.724 09:55:20 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:26:57.724 09:55:20 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:26:57.724 09:55:20 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:26:57.724 09:55:20 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 00:26:57.724 09:55:20 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:57.724 09:55:20 -- common/autotest_common.sh@10 -- # set +x 00:26:57.724 nvme0n1 00:26:57.724 09:55:21 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:57.724 09:55:21 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:26:57.724 09:55:21 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:57.724 09:55:21 -- common/autotest_common.sh@10 -- # set +x 00:26:57.724 09:55:21 -- host/auth.sh@73 -- # jq -r '.[].name' 00:26:57.724 09:55:21 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:57.982 09:55:21 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:57.982 09:55:21 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:57.982 09:55:21 -- common/autotest_common.sh@549 -- # 
xtrace_disable 00:26:57.982 09:55:21 -- common/autotest_common.sh@10 -- # set +x 00:26:57.982 09:55:21 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:57.982 09:55:21 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:26:57.982 09:55:21 -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe3072 2 00:26:57.982 09:55:21 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:26:57.982 09:55:21 -- host/auth.sh@44 -- # digest=sha256 00:26:57.982 09:55:21 -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:26:57.982 09:55:21 -- host/auth.sh@44 -- # keyid=2 00:26:57.982 09:55:21 -- host/auth.sh@45 -- # key=DHHC-1:01:OGY4NmI0ZGZmZGU3ZGE5YmQyMDJmZjY2ZTYwOTAyYjTeBEM7: 00:26:57.982 09:55:21 -- host/auth.sh@47 -- # echo 'hmac(sha256)' 00:26:57.982 09:55:21 -- host/auth.sh@48 -- # echo ffdhe3072 00:26:57.982 09:55:21 -- host/auth.sh@49 -- # echo DHHC-1:01:OGY4NmI0ZGZmZGU3ZGE5YmQyMDJmZjY2ZTYwOTAyYjTeBEM7: 00:26:57.982 09:55:21 -- host/auth.sh@111 -- # connect_authenticate sha256 ffdhe3072 2 00:26:57.982 09:55:21 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:26:57.982 09:55:21 -- host/auth.sh@68 -- # digest=sha256 00:26:57.982 09:55:21 -- host/auth.sh@68 -- # dhgroup=ffdhe3072 00:26:57.982 09:55:21 -- host/auth.sh@68 -- # keyid=2 00:26:57.982 09:55:21 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:26:57.982 09:55:21 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:57.982 09:55:21 -- common/autotest_common.sh@10 -- # set +x 00:26:57.982 09:55:21 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:57.982 09:55:21 -- host/auth.sh@70 -- # get_main_ns_ip 00:26:57.982 09:55:21 -- nvmf/common.sh@717 -- # local ip 00:26:57.982 09:55:21 -- nvmf/common.sh@718 -- # ip_candidates=() 00:26:57.982 09:55:21 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:26:57.982 09:55:21 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:57.982 09:55:21 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:57.982 09:55:21 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:26:57.982 09:55:21 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:57.982 09:55:21 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:26:57.982 09:55:21 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:26:57.982 09:55:21 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:26:57.982 09:55:21 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:26:57.982 09:55:21 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:57.982 09:55:21 -- common/autotest_common.sh@10 -- # set +x 00:26:57.982 nvme0n1 00:26:57.982 09:55:21 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:57.982 09:55:21 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:26:57.982 09:55:21 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:57.982 09:55:21 -- common/autotest_common.sh@10 -- # set +x 00:26:57.982 09:55:21 -- host/auth.sh@73 -- # jq -r '.[].name' 00:26:57.982 09:55:21 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:57.982 09:55:21 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:57.982 09:55:21 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:57.982 09:55:21 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:57.982 09:55:21 -- common/autotest_common.sh@10 -- # set +x 00:26:57.982 09:55:21 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:57.982 
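From here on the log is the same cycle repeated for every digest, DH group, and key id (the host/auth.sh@107-111 loops): set the target-side key, reconfigure the initiator, attach, confirm the controller name with bdev_nvme_get_controllers | jq, detach. The loop below sketches that matrix; connect_authenticate stands in for the rpc.py sequence shown earlier, and the digest and dhgroup lists are the ones printed at the start of the matrix.

# Test-matrix structure visible in the trace (host/auth.sh@107-111).
digests=(sha256 sha384 sha512)
dhgroups=(ffdhe2048 ffdhe3072 ffdhe4096 ffdhe6144 ffdhe8192)
# keys[] as set up above: five /tmp/spdk.key-* files, ids 0-4.

for digest in "${digests[@]}"; do
    for dhgroup in "${dhgroups[@]}"; do
        for keyid in "${!keys[@]}"; do
            nvmet_auth_set_key "$digest" "$dhgroup" "$keyid"
            # set_options + attach_controller + name check + detach
            connect_authenticate "$digest" "$dhgroup" "$keyid"
        done
    done
done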
09:55:21 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:26:57.982 09:55:21 -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe3072 3 00:26:57.982 09:55:21 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:26:57.982 09:55:21 -- host/auth.sh@44 -- # digest=sha256 00:26:57.982 09:55:21 -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:26:57.982 09:55:21 -- host/auth.sh@44 -- # keyid=3 00:26:57.982 09:55:21 -- host/auth.sh@45 -- # key=DHHC-1:02:OGYxMGE0MzAzYjdhZDkwNzZjZmQ5NGVhNWZjNGNlZDA4MzM0YzM2NjRhZGJkZWM28+CvFA==: 00:26:57.982 09:55:21 -- host/auth.sh@47 -- # echo 'hmac(sha256)' 00:26:57.982 09:55:21 -- host/auth.sh@48 -- # echo ffdhe3072 00:26:57.982 09:55:21 -- host/auth.sh@49 -- # echo DHHC-1:02:OGYxMGE0MzAzYjdhZDkwNzZjZmQ5NGVhNWZjNGNlZDA4MzM0YzM2NjRhZGJkZWM28+CvFA==: 00:26:57.982 09:55:21 -- host/auth.sh@111 -- # connect_authenticate sha256 ffdhe3072 3 00:26:57.982 09:55:21 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:26:57.982 09:55:21 -- host/auth.sh@68 -- # digest=sha256 00:26:57.982 09:55:21 -- host/auth.sh@68 -- # dhgroup=ffdhe3072 00:26:57.982 09:55:21 -- host/auth.sh@68 -- # keyid=3 00:26:57.982 09:55:21 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:26:57.982 09:55:21 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:57.982 09:55:21 -- common/autotest_common.sh@10 -- # set +x 00:26:57.982 09:55:21 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:57.982 09:55:21 -- host/auth.sh@70 -- # get_main_ns_ip 00:26:57.982 09:55:21 -- nvmf/common.sh@717 -- # local ip 00:26:57.982 09:55:21 -- nvmf/common.sh@718 -- # ip_candidates=() 00:26:57.982 09:55:21 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:26:57.982 09:55:21 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:57.982 09:55:21 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:57.982 09:55:21 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:26:57.982 09:55:21 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:57.982 09:55:21 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:26:57.982 09:55:21 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:26:57.982 09:55:21 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:26:57.982 09:55:21 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 00:26:57.982 09:55:21 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:57.982 09:55:21 -- common/autotest_common.sh@10 -- # set +x 00:26:58.241 nvme0n1 00:26:58.241 09:55:21 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:58.241 09:55:21 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:26:58.241 09:55:21 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:58.241 09:55:21 -- common/autotest_common.sh@10 -- # set +x 00:26:58.241 09:55:21 -- host/auth.sh@73 -- # jq -r '.[].name' 00:26:58.241 09:55:21 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:58.241 09:55:21 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:58.241 09:55:21 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:58.241 09:55:21 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:58.241 09:55:21 -- common/autotest_common.sh@10 -- # set +x 00:26:58.241 09:55:21 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:58.241 09:55:21 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:26:58.241 09:55:21 -- host/auth.sh@110 -- # 
nvmet_auth_set_key sha256 ffdhe3072 4 00:26:58.241 09:55:21 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:26:58.241 09:55:21 -- host/auth.sh@44 -- # digest=sha256 00:26:58.241 09:55:21 -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:26:58.241 09:55:21 -- host/auth.sh@44 -- # keyid=4 00:26:58.241 09:55:21 -- host/auth.sh@45 -- # key=DHHC-1:03:ZjFmYzBmMDlmZmUwZTY5NTA4ODQwODM5MDQ5NzUzZDE1NmE3ZGZkMTY3ZWYyMTBhYmQ3N2ZjNDQ4YmZiNzcxZYTk7W0=: 00:26:58.241 09:55:21 -- host/auth.sh@47 -- # echo 'hmac(sha256)' 00:26:58.241 09:55:21 -- host/auth.sh@48 -- # echo ffdhe3072 00:26:58.241 09:55:21 -- host/auth.sh@49 -- # echo DHHC-1:03:ZjFmYzBmMDlmZmUwZTY5NTA4ODQwODM5MDQ5NzUzZDE1NmE3ZGZkMTY3ZWYyMTBhYmQ3N2ZjNDQ4YmZiNzcxZYTk7W0=: 00:26:58.241 09:55:21 -- host/auth.sh@111 -- # connect_authenticate sha256 ffdhe3072 4 00:26:58.241 09:55:21 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:26:58.241 09:55:21 -- host/auth.sh@68 -- # digest=sha256 00:26:58.241 09:55:21 -- host/auth.sh@68 -- # dhgroup=ffdhe3072 00:26:58.241 09:55:21 -- host/auth.sh@68 -- # keyid=4 00:26:58.241 09:55:21 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:26:58.241 09:55:21 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:58.241 09:55:21 -- common/autotest_common.sh@10 -- # set +x 00:26:58.241 09:55:21 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:58.241 09:55:21 -- host/auth.sh@70 -- # get_main_ns_ip 00:26:58.241 09:55:21 -- nvmf/common.sh@717 -- # local ip 00:26:58.241 09:55:21 -- nvmf/common.sh@718 -- # ip_candidates=() 00:26:58.241 09:55:21 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:26:58.241 09:55:21 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:58.241 09:55:21 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:58.241 09:55:21 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:26:58.241 09:55:21 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:58.241 09:55:21 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:26:58.241 09:55:21 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:26:58.241 09:55:21 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:26:58.241 09:55:21 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:26:58.241 09:55:21 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:58.241 09:55:21 -- common/autotest_common.sh@10 -- # set +x 00:26:58.502 nvme0n1 00:26:58.502 09:55:21 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:58.502 09:55:21 -- host/auth.sh@73 -- # jq -r '.[].name' 00:26:58.502 09:55:21 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:26:58.502 09:55:21 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:58.502 09:55:21 -- common/autotest_common.sh@10 -- # set +x 00:26:58.502 09:55:21 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:58.502 09:55:21 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:58.502 09:55:21 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:58.502 09:55:21 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:58.502 09:55:21 -- common/autotest_common.sh@10 -- # set +x 00:26:58.502 09:55:21 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:58.502 09:55:21 -- host/auth.sh@108 -- # for dhgroup in "${dhgroups[@]}" 00:26:58.502 09:55:21 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:26:58.502 09:55:21 -- host/auth.sh@110 -- # 
nvmet_auth_set_key sha256 ffdhe4096 0 00:26:58.502 09:55:21 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:26:58.502 09:55:21 -- host/auth.sh@44 -- # digest=sha256 00:26:58.502 09:55:21 -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:26:58.502 09:55:21 -- host/auth.sh@44 -- # keyid=0 00:26:58.502 09:55:21 -- host/auth.sh@45 -- # key=DHHC-1:00:M2MzOGVhYjFjYjdjODJlMTAyZDJhM2IwMWRmMmI4ODN6Hpbr: 00:26:58.502 09:55:21 -- host/auth.sh@47 -- # echo 'hmac(sha256)' 00:26:58.502 09:55:21 -- host/auth.sh@48 -- # echo ffdhe4096 00:26:59.067 09:55:22 -- host/auth.sh@49 -- # echo DHHC-1:00:M2MzOGVhYjFjYjdjODJlMTAyZDJhM2IwMWRmMmI4ODN6Hpbr: 00:26:59.067 09:55:22 -- host/auth.sh@111 -- # connect_authenticate sha256 ffdhe4096 0 00:26:59.067 09:55:22 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:26:59.067 09:55:22 -- host/auth.sh@68 -- # digest=sha256 00:26:59.067 09:55:22 -- host/auth.sh@68 -- # dhgroup=ffdhe4096 00:26:59.067 09:55:22 -- host/auth.sh@68 -- # keyid=0 00:26:59.067 09:55:22 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:26:59.067 09:55:22 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:59.067 09:55:22 -- common/autotest_common.sh@10 -- # set +x 00:26:59.067 09:55:22 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:59.067 09:55:22 -- host/auth.sh@70 -- # get_main_ns_ip 00:26:59.067 09:55:22 -- nvmf/common.sh@717 -- # local ip 00:26:59.067 09:55:22 -- nvmf/common.sh@718 -- # ip_candidates=() 00:26:59.067 09:55:22 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:26:59.067 09:55:22 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:59.067 09:55:22 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:59.067 09:55:22 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:26:59.068 09:55:22 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:59.068 09:55:22 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:26:59.068 09:55:22 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:26:59.068 09:55:22 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:26:59.068 09:55:22 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 00:26:59.068 09:55:22 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:59.068 09:55:22 -- common/autotest_common.sh@10 -- # set +x 00:26:59.326 nvme0n1 00:26:59.326 09:55:22 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:59.326 09:55:22 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:26:59.326 09:55:22 -- host/auth.sh@73 -- # jq -r '.[].name' 00:26:59.326 09:55:22 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:59.326 09:55:22 -- common/autotest_common.sh@10 -- # set +x 00:26:59.326 09:55:22 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:59.326 09:55:22 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:59.326 09:55:22 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:59.326 09:55:22 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:59.326 09:55:22 -- common/autotest_common.sh@10 -- # set +x 00:26:59.326 09:55:22 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:59.326 09:55:22 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:26:59.326 09:55:22 -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe4096 1 00:26:59.326 09:55:22 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:26:59.326 09:55:22 -- host/auth.sh@44 -- # 
digest=sha256 00:26:59.326 09:55:22 -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:26:59.326 09:55:22 -- host/auth.sh@44 -- # keyid=1 00:26:59.326 09:55:22 -- host/auth.sh@45 -- # key=DHHC-1:00:Mjc5MDM2ZDVjN2FjYzYxODdiNjIyM2ZiNzA3Y2Y5ZTQ4NWQ0ODUwNGMwZGUzMTQxPQDxBw==: 00:26:59.326 09:55:22 -- host/auth.sh@47 -- # echo 'hmac(sha256)' 00:26:59.326 09:55:22 -- host/auth.sh@48 -- # echo ffdhe4096 00:26:59.326 09:55:22 -- host/auth.sh@49 -- # echo DHHC-1:00:Mjc5MDM2ZDVjN2FjYzYxODdiNjIyM2ZiNzA3Y2Y5ZTQ4NWQ0ODUwNGMwZGUzMTQxPQDxBw==: 00:26:59.326 09:55:22 -- host/auth.sh@111 -- # connect_authenticate sha256 ffdhe4096 1 00:26:59.326 09:55:22 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:26:59.326 09:55:22 -- host/auth.sh@68 -- # digest=sha256 00:26:59.326 09:55:22 -- host/auth.sh@68 -- # dhgroup=ffdhe4096 00:26:59.326 09:55:22 -- host/auth.sh@68 -- # keyid=1 00:26:59.326 09:55:22 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:26:59.326 09:55:22 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:59.326 09:55:22 -- common/autotest_common.sh@10 -- # set +x 00:26:59.326 09:55:22 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:59.326 09:55:22 -- host/auth.sh@70 -- # get_main_ns_ip 00:26:59.326 09:55:22 -- nvmf/common.sh@717 -- # local ip 00:26:59.326 09:55:22 -- nvmf/common.sh@718 -- # ip_candidates=() 00:26:59.326 09:55:22 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:26:59.326 09:55:22 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:59.326 09:55:22 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:59.326 09:55:22 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:26:59.326 09:55:22 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:59.326 09:55:22 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:26:59.326 09:55:22 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:26:59.326 09:55:22 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:26:59.326 09:55:22 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 00:26:59.326 09:55:22 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:59.326 09:55:22 -- common/autotest_common.sh@10 -- # set +x 00:26:59.583 nvme0n1 00:26:59.583 09:55:22 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:59.583 09:55:22 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:26:59.583 09:55:22 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:59.583 09:55:22 -- host/auth.sh@73 -- # jq -r '.[].name' 00:26:59.583 09:55:22 -- common/autotest_common.sh@10 -- # set +x 00:26:59.583 09:55:22 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:59.583 09:55:22 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:59.583 09:55:22 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:59.583 09:55:22 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:59.583 09:55:22 -- common/autotest_common.sh@10 -- # set +x 00:26:59.583 09:55:22 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:59.583 09:55:22 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:26:59.584 09:55:22 -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe4096 2 00:26:59.584 09:55:22 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:26:59.584 09:55:22 -- host/auth.sh@44 -- # digest=sha256 00:26:59.584 09:55:22 -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:26:59.584 09:55:22 -- host/auth.sh@44 
-- # keyid=2 00:26:59.584 09:55:22 -- host/auth.sh@45 -- # key=DHHC-1:01:OGY4NmI0ZGZmZGU3ZGE5YmQyMDJmZjY2ZTYwOTAyYjTeBEM7: 00:26:59.584 09:55:22 -- host/auth.sh@47 -- # echo 'hmac(sha256)' 00:26:59.584 09:55:22 -- host/auth.sh@48 -- # echo ffdhe4096 00:26:59.584 09:55:22 -- host/auth.sh@49 -- # echo DHHC-1:01:OGY4NmI0ZGZmZGU3ZGE5YmQyMDJmZjY2ZTYwOTAyYjTeBEM7: 00:26:59.584 09:55:22 -- host/auth.sh@111 -- # connect_authenticate sha256 ffdhe4096 2 00:26:59.584 09:55:22 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:26:59.584 09:55:22 -- host/auth.sh@68 -- # digest=sha256 00:26:59.584 09:55:22 -- host/auth.sh@68 -- # dhgroup=ffdhe4096 00:26:59.584 09:55:22 -- host/auth.sh@68 -- # keyid=2 00:26:59.584 09:55:22 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:26:59.584 09:55:22 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:59.584 09:55:22 -- common/autotest_common.sh@10 -- # set +x 00:26:59.584 09:55:22 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:59.584 09:55:22 -- host/auth.sh@70 -- # get_main_ns_ip 00:26:59.584 09:55:22 -- nvmf/common.sh@717 -- # local ip 00:26:59.584 09:55:22 -- nvmf/common.sh@718 -- # ip_candidates=() 00:26:59.584 09:55:22 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:26:59.584 09:55:22 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:59.584 09:55:22 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:59.584 09:55:22 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:26:59.584 09:55:22 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:59.584 09:55:22 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:26:59.584 09:55:22 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:26:59.584 09:55:22 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:26:59.584 09:55:22 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:26:59.584 09:55:22 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:59.584 09:55:22 -- common/autotest_common.sh@10 -- # set +x 00:26:59.842 nvme0n1 00:26:59.842 09:55:23 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:59.842 09:55:23 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:26:59.842 09:55:23 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:59.842 09:55:23 -- common/autotest_common.sh@10 -- # set +x 00:26:59.842 09:55:23 -- host/auth.sh@73 -- # jq -r '.[].name' 00:26:59.842 09:55:23 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:59.842 09:55:23 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:59.842 09:55:23 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:59.842 09:55:23 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:59.842 09:55:23 -- common/autotest_common.sh@10 -- # set +x 00:26:59.842 09:55:23 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:59.842 09:55:23 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:26:59.842 09:55:23 -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe4096 3 00:26:59.842 09:55:23 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:26:59.842 09:55:23 -- host/auth.sh@44 -- # digest=sha256 00:26:59.842 09:55:23 -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:26:59.842 09:55:23 -- host/auth.sh@44 -- # keyid=3 00:26:59.842 09:55:23 -- host/auth.sh@45 -- # key=DHHC-1:02:OGYxMGE0MzAzYjdhZDkwNzZjZmQ5NGVhNWZjNGNlZDA4MzM0YzM2NjRhZGJkZWM28+CvFA==: 00:26:59.842 09:55:23 
-- host/auth.sh@47 -- # echo 'hmac(sha256)' 00:26:59.842 09:55:23 -- host/auth.sh@48 -- # echo ffdhe4096 00:26:59.842 09:55:23 -- host/auth.sh@49 -- # echo DHHC-1:02:OGYxMGE0MzAzYjdhZDkwNzZjZmQ5NGVhNWZjNGNlZDA4MzM0YzM2NjRhZGJkZWM28+CvFA==: 00:26:59.842 09:55:23 -- host/auth.sh@111 -- # connect_authenticate sha256 ffdhe4096 3 00:26:59.842 09:55:23 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:26:59.842 09:55:23 -- host/auth.sh@68 -- # digest=sha256 00:26:59.842 09:55:23 -- host/auth.sh@68 -- # dhgroup=ffdhe4096 00:26:59.842 09:55:23 -- host/auth.sh@68 -- # keyid=3 00:26:59.842 09:55:23 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:26:59.842 09:55:23 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:59.842 09:55:23 -- common/autotest_common.sh@10 -- # set +x 00:26:59.842 09:55:23 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:59.842 09:55:23 -- host/auth.sh@70 -- # get_main_ns_ip 00:26:59.843 09:55:23 -- nvmf/common.sh@717 -- # local ip 00:26:59.843 09:55:23 -- nvmf/common.sh@718 -- # ip_candidates=() 00:26:59.843 09:55:23 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:26:59.843 09:55:23 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:59.843 09:55:23 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:59.843 09:55:23 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:26:59.843 09:55:23 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:59.843 09:55:23 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:26:59.843 09:55:23 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:26:59.843 09:55:23 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:26:59.843 09:55:23 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 00:26:59.843 09:55:23 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:59.843 09:55:23 -- common/autotest_common.sh@10 -- # set +x 00:27:00.101 nvme0n1 00:27:00.101 09:55:23 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:00.101 09:55:23 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:27:00.101 09:55:23 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:00.101 09:55:23 -- common/autotest_common.sh@10 -- # set +x 00:27:00.101 09:55:23 -- host/auth.sh@73 -- # jq -r '.[].name' 00:27:00.101 09:55:23 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:00.101 09:55:23 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:00.101 09:55:23 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:00.101 09:55:23 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:00.101 09:55:23 -- common/autotest_common.sh@10 -- # set +x 00:27:00.101 09:55:23 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:00.101 09:55:23 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:27:00.101 09:55:23 -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe4096 4 00:27:00.101 09:55:23 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:27:00.101 09:55:23 -- host/auth.sh@44 -- # digest=sha256 00:27:00.101 09:55:23 -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:27:00.101 09:55:23 -- host/auth.sh@44 -- # keyid=4 00:27:00.101 09:55:23 -- host/auth.sh@45 -- # key=DHHC-1:03:ZjFmYzBmMDlmZmUwZTY5NTA4ODQwODM5MDQ5NzUzZDE1NmE3ZGZkMTY3ZWYyMTBhYmQ3N2ZjNDQ4YmZiNzcxZYTk7W0=: 00:27:00.101 09:55:23 -- host/auth.sh@47 -- # echo 'hmac(sha256)' 00:27:00.101 09:55:23 -- host/auth.sh@48 -- # echo 
ffdhe4096 00:27:00.101 09:55:23 -- host/auth.sh@49 -- # echo DHHC-1:03:ZjFmYzBmMDlmZmUwZTY5NTA4ODQwODM5MDQ5NzUzZDE1NmE3ZGZkMTY3ZWYyMTBhYmQ3N2ZjNDQ4YmZiNzcxZYTk7W0=: 00:27:00.101 09:55:23 -- host/auth.sh@111 -- # connect_authenticate sha256 ffdhe4096 4 00:27:00.101 09:55:23 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:27:00.101 09:55:23 -- host/auth.sh@68 -- # digest=sha256 00:27:00.101 09:55:23 -- host/auth.sh@68 -- # dhgroup=ffdhe4096 00:27:00.101 09:55:23 -- host/auth.sh@68 -- # keyid=4 00:27:00.101 09:55:23 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:27:00.101 09:55:23 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:00.101 09:55:23 -- common/autotest_common.sh@10 -- # set +x 00:27:00.101 09:55:23 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:00.101 09:55:23 -- host/auth.sh@70 -- # get_main_ns_ip 00:27:00.101 09:55:23 -- nvmf/common.sh@717 -- # local ip 00:27:00.101 09:55:23 -- nvmf/common.sh@718 -- # ip_candidates=() 00:27:00.101 09:55:23 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:27:00.101 09:55:23 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:00.101 09:55:23 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:00.101 09:55:23 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:27:00.101 09:55:23 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:00.101 09:55:23 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:27:00.101 09:55:23 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:27:00.101 09:55:23 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:27:00.101 09:55:23 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:27:00.101 09:55:23 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:00.101 09:55:23 -- common/autotest_common.sh@10 -- # set +x 00:27:00.358 nvme0n1 00:27:00.358 09:55:23 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:00.358 09:55:23 -- host/auth.sh@73 -- # jq -r '.[].name' 00:27:00.358 09:55:23 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:27:00.358 09:55:23 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:00.358 09:55:23 -- common/autotest_common.sh@10 -- # set +x 00:27:00.358 09:55:23 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:00.358 09:55:23 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:00.358 09:55:23 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:00.358 09:55:23 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:00.358 09:55:23 -- common/autotest_common.sh@10 -- # set +x 00:27:00.358 09:55:23 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:00.358 09:55:23 -- host/auth.sh@108 -- # for dhgroup in "${dhgroups[@]}" 00:27:00.358 09:55:23 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:27:00.358 09:55:23 -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe6144 0 00:27:00.358 09:55:23 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:27:00.358 09:55:23 -- host/auth.sh@44 -- # digest=sha256 00:27:00.358 09:55:23 -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:27:00.358 09:55:23 -- host/auth.sh@44 -- # keyid=0 00:27:00.358 09:55:23 -- host/auth.sh@45 -- # key=DHHC-1:00:M2MzOGVhYjFjYjdjODJlMTAyZDJhM2IwMWRmMmI4ODN6Hpbr: 00:27:00.358 09:55:23 -- host/auth.sh@47 -- # echo 'hmac(sha256)' 00:27:00.358 09:55:23 -- host/auth.sh@48 -- # echo ffdhe6144 00:27:02.266 09:55:25 -- 
host/auth.sh@49 -- # echo DHHC-1:00:M2MzOGVhYjFjYjdjODJlMTAyZDJhM2IwMWRmMmI4ODN6Hpbr: 00:27:02.266 09:55:25 -- host/auth.sh@111 -- # connect_authenticate sha256 ffdhe6144 0 00:27:02.266 09:55:25 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:27:02.266 09:55:25 -- host/auth.sh@68 -- # digest=sha256 00:27:02.267 09:55:25 -- host/auth.sh@68 -- # dhgroup=ffdhe6144 00:27:02.267 09:55:25 -- host/auth.sh@68 -- # keyid=0 00:27:02.267 09:55:25 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:27:02.267 09:55:25 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:02.267 09:55:25 -- common/autotest_common.sh@10 -- # set +x 00:27:02.267 09:55:25 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:02.267 09:55:25 -- host/auth.sh@70 -- # get_main_ns_ip 00:27:02.267 09:55:25 -- nvmf/common.sh@717 -- # local ip 00:27:02.267 09:55:25 -- nvmf/common.sh@718 -- # ip_candidates=() 00:27:02.267 09:55:25 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:27:02.267 09:55:25 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:02.267 09:55:25 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:02.267 09:55:25 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:27:02.267 09:55:25 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:02.267 09:55:25 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:27:02.267 09:55:25 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:27:02.267 09:55:25 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:27:02.267 09:55:25 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 00:27:02.267 09:55:25 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:02.267 09:55:25 -- common/autotest_common.sh@10 -- # set +x 00:27:02.525 nvme0n1 00:27:02.525 09:55:25 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:02.525 09:55:25 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:27:02.525 09:55:25 -- host/auth.sh@73 -- # jq -r '.[].name' 00:27:02.525 09:55:25 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:02.525 09:55:25 -- common/autotest_common.sh@10 -- # set +x 00:27:02.525 09:55:25 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:02.525 09:55:25 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:02.525 09:55:25 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:02.525 09:55:25 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:02.525 09:55:25 -- common/autotest_common.sh@10 -- # set +x 00:27:02.525 09:55:25 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:02.525 09:55:25 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:27:02.525 09:55:25 -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe6144 1 00:27:02.525 09:55:25 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:27:02.525 09:55:25 -- host/auth.sh@44 -- # digest=sha256 00:27:02.525 09:55:25 -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:27:02.525 09:55:25 -- host/auth.sh@44 -- # keyid=1 00:27:02.525 09:55:25 -- host/auth.sh@45 -- # key=DHHC-1:00:Mjc5MDM2ZDVjN2FjYzYxODdiNjIyM2ZiNzA3Y2Y5ZTQ4NWQ0ODUwNGMwZGUzMTQxPQDxBw==: 00:27:02.525 09:55:25 -- host/auth.sh@47 -- # echo 'hmac(sha256)' 00:27:02.525 09:55:25 -- host/auth.sh@48 -- # echo ffdhe6144 00:27:02.525 09:55:25 -- host/auth.sh@49 -- # echo DHHC-1:00:Mjc5MDM2ZDVjN2FjYzYxODdiNjIyM2ZiNzA3Y2Y5ZTQ4NWQ0ODUwNGMwZGUzMTQxPQDxBw==: 00:27:02.525 09:55:25 -- 
host/auth.sh@111 -- # connect_authenticate sha256 ffdhe6144 1 00:27:02.525 09:55:25 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:27:02.525 09:55:25 -- host/auth.sh@68 -- # digest=sha256 00:27:02.525 09:55:25 -- host/auth.sh@68 -- # dhgroup=ffdhe6144 00:27:02.525 09:55:25 -- host/auth.sh@68 -- # keyid=1 00:27:02.525 09:55:25 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:27:02.526 09:55:25 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:02.526 09:55:25 -- common/autotest_common.sh@10 -- # set +x 00:27:02.526 09:55:25 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:02.526 09:55:25 -- host/auth.sh@70 -- # get_main_ns_ip 00:27:02.526 09:55:25 -- nvmf/common.sh@717 -- # local ip 00:27:02.526 09:55:25 -- nvmf/common.sh@718 -- # ip_candidates=() 00:27:02.526 09:55:25 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:27:02.526 09:55:25 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:02.526 09:55:25 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:02.526 09:55:25 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:27:02.526 09:55:25 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:02.526 09:55:25 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:27:02.526 09:55:25 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:27:02.526 09:55:25 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:27:02.526 09:55:25 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 00:27:02.526 09:55:25 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:02.526 09:55:25 -- common/autotest_common.sh@10 -- # set +x 00:27:03.094 nvme0n1 00:27:03.094 09:55:26 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:03.094 09:55:26 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:27:03.094 09:55:26 -- host/auth.sh@73 -- # jq -r '.[].name' 00:27:03.094 09:55:26 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:03.094 09:55:26 -- common/autotest_common.sh@10 -- # set +x 00:27:03.094 09:55:26 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:03.094 09:55:26 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:03.094 09:55:26 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:03.094 09:55:26 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:03.094 09:55:26 -- common/autotest_common.sh@10 -- # set +x 00:27:03.094 09:55:26 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:03.094 09:55:26 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:27:03.094 09:55:26 -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe6144 2 00:27:03.094 09:55:26 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:27:03.094 09:55:26 -- host/auth.sh@44 -- # digest=sha256 00:27:03.094 09:55:26 -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:27:03.094 09:55:26 -- host/auth.sh@44 -- # keyid=2 00:27:03.094 09:55:26 -- host/auth.sh@45 -- # key=DHHC-1:01:OGY4NmI0ZGZmZGU3ZGE5YmQyMDJmZjY2ZTYwOTAyYjTeBEM7: 00:27:03.094 09:55:26 -- host/auth.sh@47 -- # echo 'hmac(sha256)' 00:27:03.094 09:55:26 -- host/auth.sh@48 -- # echo ffdhe6144 00:27:03.094 09:55:26 -- host/auth.sh@49 -- # echo DHHC-1:01:OGY4NmI0ZGZmZGU3ZGE5YmQyMDJmZjY2ZTYwOTAyYjTeBEM7: 00:27:03.094 09:55:26 -- host/auth.sh@111 -- # connect_authenticate sha256 ffdhe6144 2 00:27:03.094 09:55:26 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:27:03.094 09:55:26 -- 
host/auth.sh@68 -- # digest=sha256 00:27:03.094 09:55:26 -- host/auth.sh@68 -- # dhgroup=ffdhe6144 00:27:03.094 09:55:26 -- host/auth.sh@68 -- # keyid=2 00:27:03.094 09:55:26 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:27:03.094 09:55:26 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:03.094 09:55:26 -- common/autotest_common.sh@10 -- # set +x 00:27:03.094 09:55:26 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:03.094 09:55:26 -- host/auth.sh@70 -- # get_main_ns_ip 00:27:03.094 09:55:26 -- nvmf/common.sh@717 -- # local ip 00:27:03.094 09:55:26 -- nvmf/common.sh@718 -- # ip_candidates=() 00:27:03.094 09:55:26 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:27:03.094 09:55:26 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:03.094 09:55:26 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:03.094 09:55:26 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:27:03.094 09:55:26 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:03.094 09:55:26 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:27:03.094 09:55:26 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:27:03.094 09:55:26 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:27:03.094 09:55:26 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:27:03.094 09:55:26 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:03.094 09:55:26 -- common/autotest_common.sh@10 -- # set +x 00:27:03.353 nvme0n1 00:27:03.353 09:55:26 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:03.353 09:55:26 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:27:03.353 09:55:26 -- host/auth.sh@73 -- # jq -r '.[].name' 00:27:03.353 09:55:26 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:03.353 09:55:26 -- common/autotest_common.sh@10 -- # set +x 00:27:03.353 09:55:26 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:03.353 09:55:26 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:03.353 09:55:26 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:03.353 09:55:26 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:03.353 09:55:26 -- common/autotest_common.sh@10 -- # set +x 00:27:03.611 09:55:26 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:03.611 09:55:26 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:27:03.611 09:55:26 -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe6144 3 00:27:03.611 09:55:26 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:27:03.611 09:55:26 -- host/auth.sh@44 -- # digest=sha256 00:27:03.611 09:55:26 -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:27:03.611 09:55:26 -- host/auth.sh@44 -- # keyid=3 00:27:03.611 09:55:26 -- host/auth.sh@45 -- # key=DHHC-1:02:OGYxMGE0MzAzYjdhZDkwNzZjZmQ5NGVhNWZjNGNlZDA4MzM0YzM2NjRhZGJkZWM28+CvFA==: 00:27:03.611 09:55:26 -- host/auth.sh@47 -- # echo 'hmac(sha256)' 00:27:03.612 09:55:26 -- host/auth.sh@48 -- # echo ffdhe6144 00:27:03.612 09:55:26 -- host/auth.sh@49 -- # echo DHHC-1:02:OGYxMGE0MzAzYjdhZDkwNzZjZmQ5NGVhNWZjNGNlZDA4MzM0YzM2NjRhZGJkZWM28+CvFA==: 00:27:03.612 09:55:26 -- host/auth.sh@111 -- # connect_authenticate sha256 ffdhe6144 3 00:27:03.612 09:55:26 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:27:03.612 09:55:26 -- host/auth.sh@68 -- # digest=sha256 00:27:03.612 09:55:26 -- host/auth.sh@68 -- # dhgroup=ffdhe6144 00:27:03.612 09:55:26 
-- host/auth.sh@68 -- # keyid=3 00:27:03.612 09:55:26 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:27:03.612 09:55:26 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:03.612 09:55:26 -- common/autotest_common.sh@10 -- # set +x 00:27:03.612 09:55:26 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:03.612 09:55:26 -- host/auth.sh@70 -- # get_main_ns_ip 00:27:03.612 09:55:26 -- nvmf/common.sh@717 -- # local ip 00:27:03.612 09:55:26 -- nvmf/common.sh@718 -- # ip_candidates=() 00:27:03.612 09:55:26 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:27:03.612 09:55:26 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:03.612 09:55:26 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:03.612 09:55:26 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:27:03.612 09:55:26 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:03.612 09:55:26 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:27:03.612 09:55:26 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:27:03.612 09:55:26 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:27:03.612 09:55:26 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 00:27:03.612 09:55:26 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:03.612 09:55:26 -- common/autotest_common.sh@10 -- # set +x 00:27:03.870 nvme0n1 00:27:03.870 09:55:27 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:03.870 09:55:27 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:27:03.870 09:55:27 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:03.870 09:55:27 -- host/auth.sh@73 -- # jq -r '.[].name' 00:27:03.870 09:55:27 -- common/autotest_common.sh@10 -- # set +x 00:27:03.870 09:55:27 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:03.870 09:55:27 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:03.870 09:55:27 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:03.870 09:55:27 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:03.870 09:55:27 -- common/autotest_common.sh@10 -- # set +x 00:27:03.870 09:55:27 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:03.870 09:55:27 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:27:03.870 09:55:27 -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe6144 4 00:27:03.870 09:55:27 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:27:03.870 09:55:27 -- host/auth.sh@44 -- # digest=sha256 00:27:03.870 09:55:27 -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:27:03.870 09:55:27 -- host/auth.sh@44 -- # keyid=4 00:27:03.870 09:55:27 -- host/auth.sh@45 -- # key=DHHC-1:03:ZjFmYzBmMDlmZmUwZTY5NTA4ODQwODM5MDQ5NzUzZDE1NmE3ZGZkMTY3ZWYyMTBhYmQ3N2ZjNDQ4YmZiNzcxZYTk7W0=: 00:27:03.870 09:55:27 -- host/auth.sh@47 -- # echo 'hmac(sha256)' 00:27:03.870 09:55:27 -- host/auth.sh@48 -- # echo ffdhe6144 00:27:03.870 09:55:27 -- host/auth.sh@49 -- # echo DHHC-1:03:ZjFmYzBmMDlmZmUwZTY5NTA4ODQwODM5MDQ5NzUzZDE1NmE3ZGZkMTY3ZWYyMTBhYmQ3N2ZjNDQ4YmZiNzcxZYTk7W0=: 00:27:03.870 09:55:27 -- host/auth.sh@111 -- # connect_authenticate sha256 ffdhe6144 4 00:27:03.870 09:55:27 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:27:03.870 09:55:27 -- host/auth.sh@68 -- # digest=sha256 00:27:03.870 09:55:27 -- host/auth.sh@68 -- # dhgroup=ffdhe6144 00:27:03.870 09:55:27 -- host/auth.sh@68 -- # keyid=4 00:27:03.870 09:55:27 -- host/auth.sh@69 -- # 
rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:27:03.870 09:55:27 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:03.870 09:55:27 -- common/autotest_common.sh@10 -- # set +x 00:27:04.128 09:55:27 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:04.128 09:55:27 -- host/auth.sh@70 -- # get_main_ns_ip 00:27:04.128 09:55:27 -- nvmf/common.sh@717 -- # local ip 00:27:04.129 09:55:27 -- nvmf/common.sh@718 -- # ip_candidates=() 00:27:04.129 09:55:27 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:27:04.129 09:55:27 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:04.129 09:55:27 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:04.129 09:55:27 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:27:04.129 09:55:27 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:04.129 09:55:27 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:27:04.129 09:55:27 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:27:04.129 09:55:27 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:27:04.129 09:55:27 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:27:04.129 09:55:27 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:04.129 09:55:27 -- common/autotest_common.sh@10 -- # set +x 00:27:04.386 nvme0n1 00:27:04.386 09:55:27 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:04.386 09:55:27 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:27:04.386 09:55:27 -- host/auth.sh@73 -- # jq -r '.[].name' 00:27:04.386 09:55:27 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:04.386 09:55:27 -- common/autotest_common.sh@10 -- # set +x 00:27:04.386 09:55:27 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:04.386 09:55:27 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:04.386 09:55:27 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:04.386 09:55:27 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:04.386 09:55:27 -- common/autotest_common.sh@10 -- # set +x 00:27:04.386 09:55:27 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:04.386 09:55:27 -- host/auth.sh@108 -- # for dhgroup in "${dhgroups[@]}" 00:27:04.386 09:55:27 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:27:04.386 09:55:27 -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe8192 0 00:27:04.386 09:55:27 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:27:04.386 09:55:27 -- host/auth.sh@44 -- # digest=sha256 00:27:04.386 09:55:27 -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:27:04.386 09:55:27 -- host/auth.sh@44 -- # keyid=0 00:27:04.387 09:55:27 -- host/auth.sh@45 -- # key=DHHC-1:00:M2MzOGVhYjFjYjdjODJlMTAyZDJhM2IwMWRmMmI4ODN6Hpbr: 00:27:04.387 09:55:27 -- host/auth.sh@47 -- # echo 'hmac(sha256)' 00:27:04.387 09:55:27 -- host/auth.sh@48 -- # echo ffdhe8192 00:27:08.572 09:55:31 -- host/auth.sh@49 -- # echo DHHC-1:00:M2MzOGVhYjFjYjdjODJlMTAyZDJhM2IwMWRmMmI4ODN6Hpbr: 00:27:08.572 09:55:31 -- host/auth.sh@111 -- # connect_authenticate sha256 ffdhe8192 0 00:27:08.572 09:55:31 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:27:08.572 09:55:31 -- host/auth.sh@68 -- # digest=sha256 00:27:08.572 09:55:31 -- host/auth.sh@68 -- # dhgroup=ffdhe8192 00:27:08.572 09:55:31 -- host/auth.sh@68 -- # keyid=0 00:27:08.572 09:55:31 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 
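Each digest/dhgroup/keyid combination traced above repeats the same target-side key install followed by a host-side connect, verify and teardown. A condensed sketch of one iteration, paraphrased from the auth.sh xtrace (rpc_cmd is the suite's RPC wrapper; $digest, $dhgroup and $keyid stand in for the literal values logged here, and the DHHC-1 secrets are elided; the bare nvme0n1 lines in the log appear to be the bdev name returned by the attach call):
  nvmet_auth_set_key "$digest" "$dhgroup" "$keyid"                                        # target side: install the key for this keyid
  rpc_cmd bdev_nvme_set_options --dhchap-digests "$digest" --dhchap-dhgroups "$dhgroup"   # host side: restrict negotiation to the combination under test
  rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
      -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key "key$keyid" # authenticate and connect
  rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name'                                    # expect nvme0 on success
  rpc_cmd bdev_nvme_detach_controller nvme0                                               # tear down before the next combination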
00:27:08.572 09:55:31 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:08.572 09:55:31 -- common/autotest_common.sh@10 -- # set +x 00:27:08.572 09:55:31 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:08.572 09:55:31 -- host/auth.sh@70 -- # get_main_ns_ip 00:27:08.572 09:55:31 -- nvmf/common.sh@717 -- # local ip 00:27:08.572 09:55:31 -- nvmf/common.sh@718 -- # ip_candidates=() 00:27:08.572 09:55:31 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:27:08.572 09:55:31 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:08.572 09:55:31 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:08.572 09:55:31 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:27:08.572 09:55:31 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:08.572 09:55:31 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:27:08.572 09:55:31 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:27:08.572 09:55:31 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:27:08.572 09:55:31 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 00:27:08.572 09:55:31 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:08.572 09:55:31 -- common/autotest_common.sh@10 -- # set +x 00:27:08.831 nvme0n1 00:27:08.831 09:55:32 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:08.831 09:55:32 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:27:08.831 09:55:32 -- host/auth.sh@73 -- # jq -r '.[].name' 00:27:08.831 09:55:32 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:08.831 09:55:32 -- common/autotest_common.sh@10 -- # set +x 00:27:08.831 09:55:32 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:09.089 09:55:32 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:09.090 09:55:32 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:09.090 09:55:32 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:09.090 09:55:32 -- common/autotest_common.sh@10 -- # set +x 00:27:09.090 09:55:32 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:09.090 09:55:32 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:27:09.090 09:55:32 -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe8192 1 00:27:09.090 09:55:32 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:27:09.090 09:55:32 -- host/auth.sh@44 -- # digest=sha256 00:27:09.090 09:55:32 -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:27:09.090 09:55:32 -- host/auth.sh@44 -- # keyid=1 00:27:09.090 09:55:32 -- host/auth.sh@45 -- # key=DHHC-1:00:Mjc5MDM2ZDVjN2FjYzYxODdiNjIyM2ZiNzA3Y2Y5ZTQ4NWQ0ODUwNGMwZGUzMTQxPQDxBw==: 00:27:09.090 09:55:32 -- host/auth.sh@47 -- # echo 'hmac(sha256)' 00:27:09.090 09:55:32 -- host/auth.sh@48 -- # echo ffdhe8192 00:27:09.090 09:55:32 -- host/auth.sh@49 -- # echo DHHC-1:00:Mjc5MDM2ZDVjN2FjYzYxODdiNjIyM2ZiNzA3Y2Y5ZTQ4NWQ0ODUwNGMwZGUzMTQxPQDxBw==: 00:27:09.090 09:55:32 -- host/auth.sh@111 -- # connect_authenticate sha256 ffdhe8192 1 00:27:09.090 09:55:32 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:27:09.090 09:55:32 -- host/auth.sh@68 -- # digest=sha256 00:27:09.090 09:55:32 -- host/auth.sh@68 -- # dhgroup=ffdhe8192 00:27:09.090 09:55:32 -- host/auth.sh@68 -- # keyid=1 00:27:09.090 09:55:32 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:27:09.090 09:55:32 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:09.090 09:55:32 -- 
common/autotest_common.sh@10 -- # set +x 00:27:09.090 09:55:32 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:09.090 09:55:32 -- host/auth.sh@70 -- # get_main_ns_ip 00:27:09.090 09:55:32 -- nvmf/common.sh@717 -- # local ip 00:27:09.090 09:55:32 -- nvmf/common.sh@718 -- # ip_candidates=() 00:27:09.090 09:55:32 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:27:09.090 09:55:32 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:09.090 09:55:32 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:09.090 09:55:32 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:27:09.090 09:55:32 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:09.090 09:55:32 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:27:09.090 09:55:32 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:27:09.090 09:55:32 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:27:09.090 09:55:32 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 00:27:09.090 09:55:32 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:09.090 09:55:32 -- common/autotest_common.sh@10 -- # set +x 00:27:09.657 nvme0n1 00:27:09.657 09:55:32 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:09.657 09:55:32 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:27:09.657 09:55:32 -- host/auth.sh@73 -- # jq -r '.[].name' 00:27:09.657 09:55:32 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:09.657 09:55:32 -- common/autotest_common.sh@10 -- # set +x 00:27:09.657 09:55:32 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:09.657 09:55:32 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:09.657 09:55:32 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:09.657 09:55:32 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:09.657 09:55:32 -- common/autotest_common.sh@10 -- # set +x 00:27:09.657 09:55:32 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:09.657 09:55:32 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:27:09.657 09:55:32 -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe8192 2 00:27:09.657 09:55:32 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:27:09.657 09:55:32 -- host/auth.sh@44 -- # digest=sha256 00:27:09.657 09:55:32 -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:27:09.657 09:55:32 -- host/auth.sh@44 -- # keyid=2 00:27:09.657 09:55:32 -- host/auth.sh@45 -- # key=DHHC-1:01:OGY4NmI0ZGZmZGU3ZGE5YmQyMDJmZjY2ZTYwOTAyYjTeBEM7: 00:27:09.657 09:55:32 -- host/auth.sh@47 -- # echo 'hmac(sha256)' 00:27:09.657 09:55:32 -- host/auth.sh@48 -- # echo ffdhe8192 00:27:09.657 09:55:32 -- host/auth.sh@49 -- # echo DHHC-1:01:OGY4NmI0ZGZmZGU3ZGE5YmQyMDJmZjY2ZTYwOTAyYjTeBEM7: 00:27:09.657 09:55:32 -- host/auth.sh@111 -- # connect_authenticate sha256 ffdhe8192 2 00:27:09.657 09:55:32 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:27:09.657 09:55:32 -- host/auth.sh@68 -- # digest=sha256 00:27:09.657 09:55:32 -- host/auth.sh@68 -- # dhgroup=ffdhe8192 00:27:09.657 09:55:32 -- host/auth.sh@68 -- # keyid=2 00:27:09.657 09:55:32 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:27:09.657 09:55:32 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:09.657 09:55:32 -- common/autotest_common.sh@10 -- # set +x 00:27:09.657 09:55:32 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:09.657 09:55:32 -- host/auth.sh@70 -- # 
get_main_ns_ip 00:27:09.657 09:55:32 -- nvmf/common.sh@717 -- # local ip 00:27:09.657 09:55:32 -- nvmf/common.sh@718 -- # ip_candidates=() 00:27:09.657 09:55:32 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:27:09.657 09:55:32 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:09.657 09:55:32 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:09.657 09:55:32 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:27:09.657 09:55:32 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:09.657 09:55:32 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:27:09.657 09:55:32 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:27:09.657 09:55:32 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:27:09.657 09:55:32 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:27:09.657 09:55:32 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:09.657 09:55:32 -- common/autotest_common.sh@10 -- # set +x 00:27:10.222 nvme0n1 00:27:10.222 09:55:33 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:10.222 09:55:33 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:27:10.222 09:55:33 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:10.222 09:55:33 -- host/auth.sh@73 -- # jq -r '.[].name' 00:27:10.222 09:55:33 -- common/autotest_common.sh@10 -- # set +x 00:27:10.481 09:55:33 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:10.481 09:55:33 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:10.481 09:55:33 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:10.481 09:55:33 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:10.481 09:55:33 -- common/autotest_common.sh@10 -- # set +x 00:27:10.481 09:55:33 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:10.482 09:55:33 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:27:10.482 09:55:33 -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe8192 3 00:27:10.482 09:55:33 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:27:10.482 09:55:33 -- host/auth.sh@44 -- # digest=sha256 00:27:10.482 09:55:33 -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:27:10.482 09:55:33 -- host/auth.sh@44 -- # keyid=3 00:27:10.482 09:55:33 -- host/auth.sh@45 -- # key=DHHC-1:02:OGYxMGE0MzAzYjdhZDkwNzZjZmQ5NGVhNWZjNGNlZDA4MzM0YzM2NjRhZGJkZWM28+CvFA==: 00:27:10.482 09:55:33 -- host/auth.sh@47 -- # echo 'hmac(sha256)' 00:27:10.482 09:55:33 -- host/auth.sh@48 -- # echo ffdhe8192 00:27:10.482 09:55:33 -- host/auth.sh@49 -- # echo DHHC-1:02:OGYxMGE0MzAzYjdhZDkwNzZjZmQ5NGVhNWZjNGNlZDA4MzM0YzM2NjRhZGJkZWM28+CvFA==: 00:27:10.482 09:55:33 -- host/auth.sh@111 -- # connect_authenticate sha256 ffdhe8192 3 00:27:10.482 09:55:33 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:27:10.482 09:55:33 -- host/auth.sh@68 -- # digest=sha256 00:27:10.482 09:55:33 -- host/auth.sh@68 -- # dhgroup=ffdhe8192 00:27:10.482 09:55:33 -- host/auth.sh@68 -- # keyid=3 00:27:10.482 09:55:33 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:27:10.482 09:55:33 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:10.482 09:55:33 -- common/autotest_common.sh@10 -- # set +x 00:27:10.482 09:55:33 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:10.482 09:55:33 -- host/auth.sh@70 -- # get_main_ns_ip 00:27:10.482 09:55:33 -- nvmf/common.sh@717 -- # local ip 00:27:10.482 09:55:33 -- nvmf/common.sh@718 -- 
# ip_candidates=() 00:27:10.482 09:55:33 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:27:10.482 09:55:33 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:10.482 09:55:33 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:10.482 09:55:33 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:27:10.482 09:55:33 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:10.482 09:55:33 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:27:10.482 09:55:33 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:27:10.482 09:55:33 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:27:10.482 09:55:33 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 00:27:10.482 09:55:33 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:10.482 09:55:33 -- common/autotest_common.sh@10 -- # set +x 00:27:11.049 nvme0n1 00:27:11.049 09:55:34 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:11.049 09:55:34 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:27:11.049 09:55:34 -- host/auth.sh@73 -- # jq -r '.[].name' 00:27:11.049 09:55:34 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:11.049 09:55:34 -- common/autotest_common.sh@10 -- # set +x 00:27:11.049 09:55:34 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:11.049 09:55:34 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:11.049 09:55:34 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:11.049 09:55:34 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:11.049 09:55:34 -- common/autotest_common.sh@10 -- # set +x 00:27:11.049 09:55:34 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:11.049 09:55:34 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:27:11.049 09:55:34 -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe8192 4 00:27:11.049 09:55:34 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:27:11.049 09:55:34 -- host/auth.sh@44 -- # digest=sha256 00:27:11.049 09:55:34 -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:27:11.049 09:55:34 -- host/auth.sh@44 -- # keyid=4 00:27:11.049 09:55:34 -- host/auth.sh@45 -- # key=DHHC-1:03:ZjFmYzBmMDlmZmUwZTY5NTA4ODQwODM5MDQ5NzUzZDE1NmE3ZGZkMTY3ZWYyMTBhYmQ3N2ZjNDQ4YmZiNzcxZYTk7W0=: 00:27:11.049 09:55:34 -- host/auth.sh@47 -- # echo 'hmac(sha256)' 00:27:11.050 09:55:34 -- host/auth.sh@48 -- # echo ffdhe8192 00:27:11.050 09:55:34 -- host/auth.sh@49 -- # echo DHHC-1:03:ZjFmYzBmMDlmZmUwZTY5NTA4ODQwODM5MDQ5NzUzZDE1NmE3ZGZkMTY3ZWYyMTBhYmQ3N2ZjNDQ4YmZiNzcxZYTk7W0=: 00:27:11.050 09:55:34 -- host/auth.sh@111 -- # connect_authenticate sha256 ffdhe8192 4 00:27:11.050 09:55:34 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:27:11.050 09:55:34 -- host/auth.sh@68 -- # digest=sha256 00:27:11.050 09:55:34 -- host/auth.sh@68 -- # dhgroup=ffdhe8192 00:27:11.050 09:55:34 -- host/auth.sh@68 -- # keyid=4 00:27:11.050 09:55:34 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:27:11.050 09:55:34 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:11.050 09:55:34 -- common/autotest_common.sh@10 -- # set +x 00:27:11.050 09:55:34 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:11.050 09:55:34 -- host/auth.sh@70 -- # get_main_ns_ip 00:27:11.050 09:55:34 -- nvmf/common.sh@717 -- # local ip 00:27:11.050 09:55:34 -- nvmf/common.sh@718 -- # ip_candidates=() 00:27:11.050 09:55:34 -- nvmf/common.sh@718 -- # local -A 
ip_candidates 00:27:11.050 09:55:34 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:11.050 09:55:34 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:11.050 09:55:34 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:27:11.050 09:55:34 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:11.050 09:55:34 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:27:11.050 09:55:34 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:27:11.050 09:55:34 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:27:11.050 09:55:34 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:27:11.050 09:55:34 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:11.050 09:55:34 -- common/autotest_common.sh@10 -- # set +x 00:27:11.985 nvme0n1 00:27:11.985 09:55:35 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:11.985 09:55:35 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:27:11.985 09:55:35 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:11.985 09:55:35 -- common/autotest_common.sh@10 -- # set +x 00:27:11.985 09:55:35 -- host/auth.sh@73 -- # jq -r '.[].name' 00:27:11.985 09:55:35 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:11.985 09:55:35 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:11.985 09:55:35 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:11.985 09:55:35 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:11.985 09:55:35 -- common/autotest_common.sh@10 -- # set +x 00:27:11.985 09:55:35 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:11.985 09:55:35 -- host/auth.sh@107 -- # for digest in "${digests[@]}" 00:27:11.985 09:55:35 -- host/auth.sh@108 -- # for dhgroup in "${dhgroups[@]}" 00:27:11.985 09:55:35 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:27:11.985 09:55:35 -- host/auth.sh@110 -- # nvmet_auth_set_key sha384 ffdhe2048 0 00:27:11.985 09:55:35 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:27:11.985 09:55:35 -- host/auth.sh@44 -- # digest=sha384 00:27:11.985 09:55:35 -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:27:11.985 09:55:35 -- host/auth.sh@44 -- # keyid=0 00:27:11.985 09:55:35 -- host/auth.sh@45 -- # key=DHHC-1:00:M2MzOGVhYjFjYjdjODJlMTAyZDJhM2IwMWRmMmI4ODN6Hpbr: 00:27:11.985 09:55:35 -- host/auth.sh@47 -- # echo 'hmac(sha384)' 00:27:11.985 09:55:35 -- host/auth.sh@48 -- # echo ffdhe2048 00:27:11.985 09:55:35 -- host/auth.sh@49 -- # echo DHHC-1:00:M2MzOGVhYjFjYjdjODJlMTAyZDJhM2IwMWRmMmI4ODN6Hpbr: 00:27:11.985 09:55:35 -- host/auth.sh@111 -- # connect_authenticate sha384 ffdhe2048 0 00:27:11.985 09:55:35 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:27:11.985 09:55:35 -- host/auth.sh@68 -- # digest=sha384 00:27:11.985 09:55:35 -- host/auth.sh@68 -- # dhgroup=ffdhe2048 00:27:11.985 09:55:35 -- host/auth.sh@68 -- # keyid=0 00:27:11.985 09:55:35 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:27:11.985 09:55:35 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:11.985 09:55:35 -- common/autotest_common.sh@10 -- # set +x 00:27:11.985 09:55:35 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:11.985 09:55:35 -- host/auth.sh@70 -- # get_main_ns_ip 00:27:11.985 09:55:35 -- nvmf/common.sh@717 -- # local ip 00:27:11.985 09:55:35 -- nvmf/common.sh@718 -- # ip_candidates=() 00:27:11.985 09:55:35 -- nvmf/common.sh@718 -- # local -A 
ip_candidates 00:27:11.985 09:55:35 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:11.985 09:55:35 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:11.985 09:55:35 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:27:11.985 09:55:35 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:11.985 09:55:35 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:27:11.985 09:55:35 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:27:11.985 09:55:35 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:27:11.985 09:55:35 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 00:27:11.985 09:55:35 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:11.985 09:55:35 -- common/autotest_common.sh@10 -- # set +x 00:27:11.985 nvme0n1 00:27:11.985 09:55:35 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:11.985 09:55:35 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:27:11.985 09:55:35 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:11.985 09:55:35 -- common/autotest_common.sh@10 -- # set +x 00:27:11.985 09:55:35 -- host/auth.sh@73 -- # jq -r '.[].name' 00:27:11.985 09:55:35 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:11.985 09:55:35 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:11.985 09:55:35 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:11.985 09:55:35 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:11.985 09:55:35 -- common/autotest_common.sh@10 -- # set +x 00:27:11.985 09:55:35 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:11.985 09:55:35 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:27:11.985 09:55:35 -- host/auth.sh@110 -- # nvmet_auth_set_key sha384 ffdhe2048 1 00:27:11.985 09:55:35 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:27:11.985 09:55:35 -- host/auth.sh@44 -- # digest=sha384 00:27:11.985 09:55:35 -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:27:11.985 09:55:35 -- host/auth.sh@44 -- # keyid=1 00:27:11.985 09:55:35 -- host/auth.sh@45 -- # key=DHHC-1:00:Mjc5MDM2ZDVjN2FjYzYxODdiNjIyM2ZiNzA3Y2Y5ZTQ4NWQ0ODUwNGMwZGUzMTQxPQDxBw==: 00:27:11.985 09:55:35 -- host/auth.sh@47 -- # echo 'hmac(sha384)' 00:27:11.985 09:55:35 -- host/auth.sh@48 -- # echo ffdhe2048 00:27:11.985 09:55:35 -- host/auth.sh@49 -- # echo DHHC-1:00:Mjc5MDM2ZDVjN2FjYzYxODdiNjIyM2ZiNzA3Y2Y5ZTQ4NWQ0ODUwNGMwZGUzMTQxPQDxBw==: 00:27:11.985 09:55:35 -- host/auth.sh@111 -- # connect_authenticate sha384 ffdhe2048 1 00:27:11.985 09:55:35 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:27:11.985 09:55:35 -- host/auth.sh@68 -- # digest=sha384 00:27:11.985 09:55:35 -- host/auth.sh@68 -- # dhgroup=ffdhe2048 00:27:11.985 09:55:35 -- host/auth.sh@68 -- # keyid=1 00:27:11.985 09:55:35 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:27:11.985 09:55:35 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:11.985 09:55:35 -- common/autotest_common.sh@10 -- # set +x 00:27:11.985 09:55:35 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:11.985 09:55:35 -- host/auth.sh@70 -- # get_main_ns_ip 00:27:11.985 09:55:35 -- nvmf/common.sh@717 -- # local ip 00:27:11.985 09:55:35 -- nvmf/common.sh@718 -- # ip_candidates=() 00:27:11.985 09:55:35 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:27:11.985 09:55:35 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:11.985 
09:55:35 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:11.985 09:55:35 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:27:11.985 09:55:35 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:11.986 09:55:35 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:27:11.986 09:55:35 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:27:11.986 09:55:35 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:27:11.986 09:55:35 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 00:27:11.986 09:55:35 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:11.986 09:55:35 -- common/autotest_common.sh@10 -- # set +x 00:27:11.986 nvme0n1 00:27:11.986 09:55:35 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:12.245 09:55:35 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:27:12.245 09:55:35 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:12.245 09:55:35 -- common/autotest_common.sh@10 -- # set +x 00:27:12.245 09:55:35 -- host/auth.sh@73 -- # jq -r '.[].name' 00:27:12.245 09:55:35 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:12.245 09:55:35 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:12.245 09:55:35 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:12.245 09:55:35 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:12.245 09:55:35 -- common/autotest_common.sh@10 -- # set +x 00:27:12.245 09:55:35 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:12.245 09:55:35 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:27:12.245 09:55:35 -- host/auth.sh@110 -- # nvmet_auth_set_key sha384 ffdhe2048 2 00:27:12.245 09:55:35 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:27:12.245 09:55:35 -- host/auth.sh@44 -- # digest=sha384 00:27:12.245 09:55:35 -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:27:12.245 09:55:35 -- host/auth.sh@44 -- # keyid=2 00:27:12.245 09:55:35 -- host/auth.sh@45 -- # key=DHHC-1:01:OGY4NmI0ZGZmZGU3ZGE5YmQyMDJmZjY2ZTYwOTAyYjTeBEM7: 00:27:12.245 09:55:35 -- host/auth.sh@47 -- # echo 'hmac(sha384)' 00:27:12.245 09:55:35 -- host/auth.sh@48 -- # echo ffdhe2048 00:27:12.245 09:55:35 -- host/auth.sh@49 -- # echo DHHC-1:01:OGY4NmI0ZGZmZGU3ZGE5YmQyMDJmZjY2ZTYwOTAyYjTeBEM7: 00:27:12.245 09:55:35 -- host/auth.sh@111 -- # connect_authenticate sha384 ffdhe2048 2 00:27:12.245 09:55:35 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:27:12.245 09:55:35 -- host/auth.sh@68 -- # digest=sha384 00:27:12.245 09:55:35 -- host/auth.sh@68 -- # dhgroup=ffdhe2048 00:27:12.245 09:55:35 -- host/auth.sh@68 -- # keyid=2 00:27:12.245 09:55:35 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:27:12.245 09:55:35 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:12.245 09:55:35 -- common/autotest_common.sh@10 -- # set +x 00:27:12.245 09:55:35 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:12.245 09:55:35 -- host/auth.sh@70 -- # get_main_ns_ip 00:27:12.245 09:55:35 -- nvmf/common.sh@717 -- # local ip 00:27:12.245 09:55:35 -- nvmf/common.sh@718 -- # ip_candidates=() 00:27:12.245 09:55:35 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:27:12.245 09:55:35 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:12.245 09:55:35 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:12.245 09:55:35 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:27:12.245 09:55:35 -- 
nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:12.245 09:55:35 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:27:12.245 09:55:35 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:27:12.245 09:55:35 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:27:12.245 09:55:35 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:27:12.245 09:55:35 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:12.245 09:55:35 -- common/autotest_common.sh@10 -- # set +x 00:27:12.245 nvme0n1 00:27:12.245 09:55:35 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:12.245 09:55:35 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:27:12.245 09:55:35 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:12.245 09:55:35 -- host/auth.sh@73 -- # jq -r '.[].name' 00:27:12.245 09:55:35 -- common/autotest_common.sh@10 -- # set +x 00:27:12.245 09:55:35 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:12.245 09:55:35 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:12.245 09:55:35 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:12.245 09:55:35 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:12.245 09:55:35 -- common/autotest_common.sh@10 -- # set +x 00:27:12.245 09:55:35 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:12.245 09:55:35 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:27:12.245 09:55:35 -- host/auth.sh@110 -- # nvmet_auth_set_key sha384 ffdhe2048 3 00:27:12.245 09:55:35 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:27:12.245 09:55:35 -- host/auth.sh@44 -- # digest=sha384 00:27:12.245 09:55:35 -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:27:12.245 09:55:35 -- host/auth.sh@44 -- # keyid=3 00:27:12.245 09:55:35 -- host/auth.sh@45 -- # key=DHHC-1:02:OGYxMGE0MzAzYjdhZDkwNzZjZmQ5NGVhNWZjNGNlZDA4MzM0YzM2NjRhZGJkZWM28+CvFA==: 00:27:12.245 09:55:35 -- host/auth.sh@47 -- # echo 'hmac(sha384)' 00:27:12.245 09:55:35 -- host/auth.sh@48 -- # echo ffdhe2048 00:27:12.246 09:55:35 -- host/auth.sh@49 -- # echo DHHC-1:02:OGYxMGE0MzAzYjdhZDkwNzZjZmQ5NGVhNWZjNGNlZDA4MzM0YzM2NjRhZGJkZWM28+CvFA==: 00:27:12.246 09:55:35 -- host/auth.sh@111 -- # connect_authenticate sha384 ffdhe2048 3 00:27:12.246 09:55:35 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:27:12.246 09:55:35 -- host/auth.sh@68 -- # digest=sha384 00:27:12.246 09:55:35 -- host/auth.sh@68 -- # dhgroup=ffdhe2048 00:27:12.246 09:55:35 -- host/auth.sh@68 -- # keyid=3 00:27:12.246 09:55:35 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:27:12.246 09:55:35 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:12.246 09:55:35 -- common/autotest_common.sh@10 -- # set +x 00:27:12.246 09:55:35 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:12.504 09:55:35 -- host/auth.sh@70 -- # get_main_ns_ip 00:27:12.504 09:55:35 -- nvmf/common.sh@717 -- # local ip 00:27:12.504 09:55:35 -- nvmf/common.sh@718 -- # ip_candidates=() 00:27:12.504 09:55:35 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:27:12.504 09:55:35 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:12.504 09:55:35 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:12.504 09:55:35 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:27:12.504 09:55:35 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:12.504 09:55:35 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 
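The get_main_ns_ip trace that precedes every attach (nvmf/common.sh@717-731) maps the transport under test to the address the initiator should dial. A minimal reconstruction inferred from that xtrace: the TEST_TRANSPORT name, the ${!ip} indirection and the return codes are assumptions; only the candidate table and the echoed 10.0.0.1 come from the log.
  get_main_ns_ip() {
      local ip
      local -A ip_candidates=(
          ["rdma"]=NVMF_FIRST_TARGET_IP   # RDMA jobs dial the first target IP
          ["tcp"]=NVMF_INITIATOR_IP       # TCP/virt jobs reuse the initiator IP (10.0.0.1 in this run)
      )
      [[ -z $TEST_TRANSPORT || -z ${ip_candidates[$TEST_TRANSPORT]} ]] && return 1
      ip=${ip_candidates[$TEST_TRANSPORT]}
      [[ -z ${!ip} ]] && return 1         # dereference the variable named by $ip
      echo "${!ip}"
  }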
00:27:12.504 09:55:35 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:27:12.504 09:55:35 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:27:12.504 09:55:35 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 00:27:12.504 09:55:35 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:12.504 09:55:35 -- common/autotest_common.sh@10 -- # set +x 00:27:12.504 nvme0n1 00:27:12.504 09:55:35 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:12.504 09:55:35 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:27:12.504 09:55:35 -- host/auth.sh@73 -- # jq -r '.[].name' 00:27:12.504 09:55:35 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:12.504 09:55:35 -- common/autotest_common.sh@10 -- # set +x 00:27:12.504 09:55:35 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:12.504 09:55:35 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:12.504 09:55:35 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:12.504 09:55:35 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:12.504 09:55:35 -- common/autotest_common.sh@10 -- # set +x 00:27:12.504 09:55:35 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:12.504 09:55:35 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:27:12.504 09:55:35 -- host/auth.sh@110 -- # nvmet_auth_set_key sha384 ffdhe2048 4 00:27:12.504 09:55:35 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:27:12.504 09:55:35 -- host/auth.sh@44 -- # digest=sha384 00:27:12.504 09:55:35 -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:27:12.504 09:55:35 -- host/auth.sh@44 -- # keyid=4 00:27:12.504 09:55:35 -- host/auth.sh@45 -- # key=DHHC-1:03:ZjFmYzBmMDlmZmUwZTY5NTA4ODQwODM5MDQ5NzUzZDE1NmE3ZGZkMTY3ZWYyMTBhYmQ3N2ZjNDQ4YmZiNzcxZYTk7W0=: 00:27:12.504 09:55:35 -- host/auth.sh@47 -- # echo 'hmac(sha384)' 00:27:12.504 09:55:35 -- host/auth.sh@48 -- # echo ffdhe2048 00:27:12.504 09:55:35 -- host/auth.sh@49 -- # echo DHHC-1:03:ZjFmYzBmMDlmZmUwZTY5NTA4ODQwODM5MDQ5NzUzZDE1NmE3ZGZkMTY3ZWYyMTBhYmQ3N2ZjNDQ4YmZiNzcxZYTk7W0=: 00:27:12.504 09:55:35 -- host/auth.sh@111 -- # connect_authenticate sha384 ffdhe2048 4 00:27:12.504 09:55:35 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:27:12.504 09:55:35 -- host/auth.sh@68 -- # digest=sha384 00:27:12.504 09:55:35 -- host/auth.sh@68 -- # dhgroup=ffdhe2048 00:27:12.504 09:55:35 -- host/auth.sh@68 -- # keyid=4 00:27:12.504 09:55:35 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:27:12.504 09:55:35 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:12.504 09:55:35 -- common/autotest_common.sh@10 -- # set +x 00:27:12.504 09:55:35 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:12.504 09:55:35 -- host/auth.sh@70 -- # get_main_ns_ip 00:27:12.504 09:55:35 -- nvmf/common.sh@717 -- # local ip 00:27:12.504 09:55:35 -- nvmf/common.sh@718 -- # ip_candidates=() 00:27:12.504 09:55:35 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:27:12.504 09:55:35 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:12.504 09:55:35 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:12.504 09:55:35 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:27:12.504 09:55:35 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:12.504 09:55:35 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:27:12.504 09:55:35 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:27:12.504 
09:55:35 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:27:12.504 09:55:35 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:27:12.504 09:55:35 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:12.504 09:55:35 -- common/autotest_common.sh@10 -- # set +x 00:27:12.763 nvme0n1 00:27:12.763 09:55:35 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:12.763 09:55:35 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:27:12.763 09:55:35 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:12.763 09:55:35 -- common/autotest_common.sh@10 -- # set +x 00:27:12.763 09:55:35 -- host/auth.sh@73 -- # jq -r '.[].name' 00:27:12.763 09:55:35 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:12.763 09:55:35 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:12.763 09:55:35 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:12.763 09:55:35 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:12.763 09:55:35 -- common/autotest_common.sh@10 -- # set +x 00:27:12.763 09:55:35 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:12.763 09:55:35 -- host/auth.sh@108 -- # for dhgroup in "${dhgroups[@]}" 00:27:12.763 09:55:35 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:27:12.763 09:55:35 -- host/auth.sh@110 -- # nvmet_auth_set_key sha384 ffdhe3072 0 00:27:12.763 09:55:35 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:27:12.763 09:55:35 -- host/auth.sh@44 -- # digest=sha384 00:27:12.763 09:55:35 -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:27:12.763 09:55:35 -- host/auth.sh@44 -- # keyid=0 00:27:12.763 09:55:35 -- host/auth.sh@45 -- # key=DHHC-1:00:M2MzOGVhYjFjYjdjODJlMTAyZDJhM2IwMWRmMmI4ODN6Hpbr: 00:27:12.763 09:55:35 -- host/auth.sh@47 -- # echo 'hmac(sha384)' 00:27:12.763 09:55:35 -- host/auth.sh@48 -- # echo ffdhe3072 00:27:12.763 09:55:35 -- host/auth.sh@49 -- # echo DHHC-1:00:M2MzOGVhYjFjYjdjODJlMTAyZDJhM2IwMWRmMmI4ODN6Hpbr: 00:27:12.763 09:55:35 -- host/auth.sh@111 -- # connect_authenticate sha384 ffdhe3072 0 00:27:12.763 09:55:35 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:27:12.763 09:55:35 -- host/auth.sh@68 -- # digest=sha384 00:27:12.763 09:55:35 -- host/auth.sh@68 -- # dhgroup=ffdhe3072 00:27:12.763 09:55:35 -- host/auth.sh@68 -- # keyid=0 00:27:12.763 09:55:35 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:27:12.763 09:55:35 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:12.763 09:55:35 -- common/autotest_common.sh@10 -- # set +x 00:27:12.763 09:55:35 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:12.763 09:55:35 -- host/auth.sh@70 -- # get_main_ns_ip 00:27:12.763 09:55:35 -- nvmf/common.sh@717 -- # local ip 00:27:12.763 09:55:35 -- nvmf/common.sh@718 -- # ip_candidates=() 00:27:12.763 09:55:35 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:27:12.763 09:55:35 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:12.764 09:55:35 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:12.764 09:55:35 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:27:12.764 09:55:35 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:12.764 09:55:35 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:27:12.764 09:55:35 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:27:12.764 09:55:35 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:27:12.764 09:55:36 -- 
host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 00:27:12.764 09:55:36 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:12.764 09:55:36 -- common/autotest_common.sh@10 -- # set +x 00:27:12.764 nvme0n1 00:27:12.764 09:55:36 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:12.764 09:55:36 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:27:12.764 09:55:36 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:12.764 09:55:36 -- common/autotest_common.sh@10 -- # set +x 00:27:12.764 09:55:36 -- host/auth.sh@73 -- # jq -r '.[].name' 00:27:12.764 09:55:36 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:13.023 09:55:36 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:13.023 09:55:36 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:13.023 09:55:36 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:13.023 09:55:36 -- common/autotest_common.sh@10 -- # set +x 00:27:13.023 09:55:36 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:13.023 09:55:36 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:27:13.023 09:55:36 -- host/auth.sh@110 -- # nvmet_auth_set_key sha384 ffdhe3072 1 00:27:13.023 09:55:36 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:27:13.023 09:55:36 -- host/auth.sh@44 -- # digest=sha384 00:27:13.023 09:55:36 -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:27:13.023 09:55:36 -- host/auth.sh@44 -- # keyid=1 00:27:13.023 09:55:36 -- host/auth.sh@45 -- # key=DHHC-1:00:Mjc5MDM2ZDVjN2FjYzYxODdiNjIyM2ZiNzA3Y2Y5ZTQ4NWQ0ODUwNGMwZGUzMTQxPQDxBw==: 00:27:13.023 09:55:36 -- host/auth.sh@47 -- # echo 'hmac(sha384)' 00:27:13.023 09:55:36 -- host/auth.sh@48 -- # echo ffdhe3072 00:27:13.023 09:55:36 -- host/auth.sh@49 -- # echo DHHC-1:00:Mjc5MDM2ZDVjN2FjYzYxODdiNjIyM2ZiNzA3Y2Y5ZTQ4NWQ0ODUwNGMwZGUzMTQxPQDxBw==: 00:27:13.023 09:55:36 -- host/auth.sh@111 -- # connect_authenticate sha384 ffdhe3072 1 00:27:13.023 09:55:36 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:27:13.023 09:55:36 -- host/auth.sh@68 -- # digest=sha384 00:27:13.023 09:55:36 -- host/auth.sh@68 -- # dhgroup=ffdhe3072 00:27:13.023 09:55:36 -- host/auth.sh@68 -- # keyid=1 00:27:13.023 09:55:36 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:27:13.023 09:55:36 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:13.023 09:55:36 -- common/autotest_common.sh@10 -- # set +x 00:27:13.023 09:55:36 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:13.023 09:55:36 -- host/auth.sh@70 -- # get_main_ns_ip 00:27:13.023 09:55:36 -- nvmf/common.sh@717 -- # local ip 00:27:13.023 09:55:36 -- nvmf/common.sh@718 -- # ip_candidates=() 00:27:13.023 09:55:36 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:27:13.023 09:55:36 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:13.023 09:55:36 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:13.023 09:55:36 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:27:13.023 09:55:36 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:13.023 09:55:36 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:27:13.023 09:55:36 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:27:13.023 09:55:36 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:27:13.023 09:55:36 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q 
nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 00:27:13.023 09:55:36 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:13.023 09:55:36 -- common/autotest_common.sh@10 -- # set +x 00:27:13.023 nvme0n1 00:27:13.023 09:55:36 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:13.023 09:55:36 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:27:13.023 09:55:36 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:13.023 09:55:36 -- common/autotest_common.sh@10 -- # set +x 00:27:13.023 09:55:36 -- host/auth.sh@73 -- # jq -r '.[].name' 00:27:13.023 09:55:36 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:13.023 09:55:36 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:13.023 09:55:36 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:13.023 09:55:36 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:13.023 09:55:36 -- common/autotest_common.sh@10 -- # set +x 00:27:13.023 09:55:36 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:13.023 09:55:36 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:27:13.023 09:55:36 -- host/auth.sh@110 -- # nvmet_auth_set_key sha384 ffdhe3072 2 00:27:13.023 09:55:36 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:27:13.023 09:55:36 -- host/auth.sh@44 -- # digest=sha384 00:27:13.023 09:55:36 -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:27:13.023 09:55:36 -- host/auth.sh@44 -- # keyid=2 00:27:13.023 09:55:36 -- host/auth.sh@45 -- # key=DHHC-1:01:OGY4NmI0ZGZmZGU3ZGE5YmQyMDJmZjY2ZTYwOTAyYjTeBEM7: 00:27:13.023 09:55:36 -- host/auth.sh@47 -- # echo 'hmac(sha384)' 00:27:13.023 09:55:36 -- host/auth.sh@48 -- # echo ffdhe3072 00:27:13.023 09:55:36 -- host/auth.sh@49 -- # echo DHHC-1:01:OGY4NmI0ZGZmZGU3ZGE5YmQyMDJmZjY2ZTYwOTAyYjTeBEM7: 00:27:13.023 09:55:36 -- host/auth.sh@111 -- # connect_authenticate sha384 ffdhe3072 2 00:27:13.023 09:55:36 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:27:13.023 09:55:36 -- host/auth.sh@68 -- # digest=sha384 00:27:13.023 09:55:36 -- host/auth.sh@68 -- # dhgroup=ffdhe3072 00:27:13.023 09:55:36 -- host/auth.sh@68 -- # keyid=2 00:27:13.023 09:55:36 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:27:13.023 09:55:36 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:13.023 09:55:36 -- common/autotest_common.sh@10 -- # set +x 00:27:13.023 09:55:36 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:13.023 09:55:36 -- host/auth.sh@70 -- # get_main_ns_ip 00:27:13.023 09:55:36 -- nvmf/common.sh@717 -- # local ip 00:27:13.023 09:55:36 -- nvmf/common.sh@718 -- # ip_candidates=() 00:27:13.023 09:55:36 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:27:13.023 09:55:36 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:13.023 09:55:36 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:13.023 09:55:36 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:27:13.023 09:55:36 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:13.023 09:55:36 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:27:13.023 09:55:36 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:27:13.023 09:55:36 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:27:13.023 09:55:36 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:27:13.023 09:55:36 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:13.023 
09:55:36 -- common/autotest_common.sh@10 -- # set +x 00:27:13.289 nvme0n1 00:27:13.289 09:55:36 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:13.289 09:55:36 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:27:13.289 09:55:36 -- host/auth.sh@73 -- # jq -r '.[].name' 00:27:13.289 09:55:36 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:13.289 09:55:36 -- common/autotest_common.sh@10 -- # set +x 00:27:13.289 09:55:36 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:13.289 09:55:36 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:13.289 09:55:36 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:13.289 09:55:36 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:13.289 09:55:36 -- common/autotest_common.sh@10 -- # set +x 00:27:13.289 09:55:36 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:13.289 09:55:36 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:27:13.289 09:55:36 -- host/auth.sh@110 -- # nvmet_auth_set_key sha384 ffdhe3072 3 00:27:13.289 09:55:36 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:27:13.289 09:55:36 -- host/auth.sh@44 -- # digest=sha384 00:27:13.289 09:55:36 -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:27:13.289 09:55:36 -- host/auth.sh@44 -- # keyid=3 00:27:13.289 09:55:36 -- host/auth.sh@45 -- # key=DHHC-1:02:OGYxMGE0MzAzYjdhZDkwNzZjZmQ5NGVhNWZjNGNlZDA4MzM0YzM2NjRhZGJkZWM28+CvFA==: 00:27:13.289 09:55:36 -- host/auth.sh@47 -- # echo 'hmac(sha384)' 00:27:13.289 09:55:36 -- host/auth.sh@48 -- # echo ffdhe3072 00:27:13.289 09:55:36 -- host/auth.sh@49 -- # echo DHHC-1:02:OGYxMGE0MzAzYjdhZDkwNzZjZmQ5NGVhNWZjNGNlZDA4MzM0YzM2NjRhZGJkZWM28+CvFA==: 00:27:13.289 09:55:36 -- host/auth.sh@111 -- # connect_authenticate sha384 ffdhe3072 3 00:27:13.289 09:55:36 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:27:13.289 09:55:36 -- host/auth.sh@68 -- # digest=sha384 00:27:13.289 09:55:36 -- host/auth.sh@68 -- # dhgroup=ffdhe3072 00:27:13.289 09:55:36 -- host/auth.sh@68 -- # keyid=3 00:27:13.290 09:55:36 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:27:13.290 09:55:36 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:13.290 09:55:36 -- common/autotest_common.sh@10 -- # set +x 00:27:13.290 09:55:36 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:13.290 09:55:36 -- host/auth.sh@70 -- # get_main_ns_ip 00:27:13.290 09:55:36 -- nvmf/common.sh@717 -- # local ip 00:27:13.290 09:55:36 -- nvmf/common.sh@718 -- # ip_candidates=() 00:27:13.290 09:55:36 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:27:13.290 09:55:36 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:13.290 09:55:36 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:13.290 09:55:36 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:27:13.290 09:55:36 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:13.290 09:55:36 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:27:13.290 09:55:36 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:27:13.290 09:55:36 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:27:13.290 09:55:36 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 00:27:13.290 09:55:36 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:13.290 09:55:36 -- common/autotest_common.sh@10 -- # set +x 00:27:13.549 nvme0n1 00:27:13.549 09:55:36 -- 
common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:13.549 09:55:36 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:27:13.549 09:55:36 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:13.549 09:55:36 -- common/autotest_common.sh@10 -- # set +x 00:27:13.549 09:55:36 -- host/auth.sh@73 -- # jq -r '.[].name' 00:27:13.549 09:55:36 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:13.549 09:55:36 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:13.549 09:55:36 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:13.549 09:55:36 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:13.549 09:55:36 -- common/autotest_common.sh@10 -- # set +x 00:27:13.549 09:55:36 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:13.549 09:55:36 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:27:13.549 09:55:36 -- host/auth.sh@110 -- # nvmet_auth_set_key sha384 ffdhe3072 4 00:27:13.549 09:55:36 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:27:13.549 09:55:36 -- host/auth.sh@44 -- # digest=sha384 00:27:13.549 09:55:36 -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:27:13.549 09:55:36 -- host/auth.sh@44 -- # keyid=4 00:27:13.549 09:55:36 -- host/auth.sh@45 -- # key=DHHC-1:03:ZjFmYzBmMDlmZmUwZTY5NTA4ODQwODM5MDQ5NzUzZDE1NmE3ZGZkMTY3ZWYyMTBhYmQ3N2ZjNDQ4YmZiNzcxZYTk7W0=: 00:27:13.549 09:55:36 -- host/auth.sh@47 -- # echo 'hmac(sha384)' 00:27:13.549 09:55:36 -- host/auth.sh@48 -- # echo ffdhe3072 00:27:13.549 09:55:36 -- host/auth.sh@49 -- # echo DHHC-1:03:ZjFmYzBmMDlmZmUwZTY5NTA4ODQwODM5MDQ5NzUzZDE1NmE3ZGZkMTY3ZWYyMTBhYmQ3N2ZjNDQ4YmZiNzcxZYTk7W0=: 00:27:13.549 09:55:36 -- host/auth.sh@111 -- # connect_authenticate sha384 ffdhe3072 4 00:27:13.549 09:55:36 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:27:13.549 09:55:36 -- host/auth.sh@68 -- # digest=sha384 00:27:13.549 09:55:36 -- host/auth.sh@68 -- # dhgroup=ffdhe3072 00:27:13.549 09:55:36 -- host/auth.sh@68 -- # keyid=4 00:27:13.549 09:55:36 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:27:13.549 09:55:36 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:13.549 09:55:36 -- common/autotest_common.sh@10 -- # set +x 00:27:13.549 09:55:36 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:13.549 09:55:36 -- host/auth.sh@70 -- # get_main_ns_ip 00:27:13.549 09:55:36 -- nvmf/common.sh@717 -- # local ip 00:27:13.549 09:55:36 -- nvmf/common.sh@718 -- # ip_candidates=() 00:27:13.549 09:55:36 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:27:13.549 09:55:36 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:13.549 09:55:36 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:13.549 09:55:36 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:27:13.549 09:55:36 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:13.549 09:55:36 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:27:13.549 09:55:36 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:27:13.549 09:55:36 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:27:13.549 09:55:36 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:27:13.549 09:55:36 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:13.549 09:55:36 -- common/autotest_common.sh@10 -- # set +x 00:27:13.549 nvme0n1 00:27:13.549 09:55:36 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:13.549 09:55:36 -- 
host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:27:13.549 09:55:36 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:13.549 09:55:36 -- host/auth.sh@73 -- # jq -r '.[].name' 00:27:13.549 09:55:36 -- common/autotest_common.sh@10 -- # set +x 00:27:13.807 09:55:36 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:13.807 09:55:36 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:13.807 09:55:36 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:13.807 09:55:36 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:13.807 09:55:36 -- common/autotest_common.sh@10 -- # set +x 00:27:13.807 09:55:36 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:13.807 09:55:36 -- host/auth.sh@108 -- # for dhgroup in "${dhgroups[@]}" 00:27:13.807 09:55:36 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:27:13.807 09:55:36 -- host/auth.sh@110 -- # nvmet_auth_set_key sha384 ffdhe4096 0 00:27:13.807 09:55:36 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:27:13.807 09:55:36 -- host/auth.sh@44 -- # digest=sha384 00:27:13.807 09:55:36 -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:27:13.807 09:55:36 -- host/auth.sh@44 -- # keyid=0 00:27:13.807 09:55:36 -- host/auth.sh@45 -- # key=DHHC-1:00:M2MzOGVhYjFjYjdjODJlMTAyZDJhM2IwMWRmMmI4ODN6Hpbr: 00:27:13.807 09:55:36 -- host/auth.sh@47 -- # echo 'hmac(sha384)' 00:27:13.807 09:55:36 -- host/auth.sh@48 -- # echo ffdhe4096 00:27:13.807 09:55:36 -- host/auth.sh@49 -- # echo DHHC-1:00:M2MzOGVhYjFjYjdjODJlMTAyZDJhM2IwMWRmMmI4ODN6Hpbr: 00:27:13.807 09:55:36 -- host/auth.sh@111 -- # connect_authenticate sha384 ffdhe4096 0 00:27:13.807 09:55:36 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:27:13.807 09:55:36 -- host/auth.sh@68 -- # digest=sha384 00:27:13.807 09:55:36 -- host/auth.sh@68 -- # dhgroup=ffdhe4096 00:27:13.807 09:55:37 -- host/auth.sh@68 -- # keyid=0 00:27:13.807 09:55:37 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:27:13.807 09:55:37 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:13.807 09:55:37 -- common/autotest_common.sh@10 -- # set +x 00:27:13.807 09:55:37 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:13.807 09:55:37 -- host/auth.sh@70 -- # get_main_ns_ip 00:27:13.807 09:55:37 -- nvmf/common.sh@717 -- # local ip 00:27:13.807 09:55:37 -- nvmf/common.sh@718 -- # ip_candidates=() 00:27:13.807 09:55:37 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:27:13.807 09:55:37 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:13.807 09:55:37 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:13.807 09:55:37 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:27:13.807 09:55:37 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:13.807 09:55:37 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:27:13.807 09:55:37 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:27:13.807 09:55:37 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:27:13.807 09:55:37 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 00:27:13.807 09:55:37 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:13.807 09:55:37 -- common/autotest_common.sh@10 -- # set +x 00:27:13.807 nvme0n1 00:27:13.807 09:55:37 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:14.066 09:55:37 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:27:14.066 09:55:37 -- 
host/auth.sh@73 -- # jq -r '.[].name' 00:27:14.066 09:55:37 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:14.066 09:55:37 -- common/autotest_common.sh@10 -- # set +x 00:27:14.066 09:55:37 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:14.066 09:55:37 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:14.066 09:55:37 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:14.066 09:55:37 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:14.066 09:55:37 -- common/autotest_common.sh@10 -- # set +x 00:27:14.066 09:55:37 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:14.066 09:55:37 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:27:14.066 09:55:37 -- host/auth.sh@110 -- # nvmet_auth_set_key sha384 ffdhe4096 1 00:27:14.066 09:55:37 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:27:14.066 09:55:37 -- host/auth.sh@44 -- # digest=sha384 00:27:14.066 09:55:37 -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:27:14.066 09:55:37 -- host/auth.sh@44 -- # keyid=1 00:27:14.066 09:55:37 -- host/auth.sh@45 -- # key=DHHC-1:00:Mjc5MDM2ZDVjN2FjYzYxODdiNjIyM2ZiNzA3Y2Y5ZTQ4NWQ0ODUwNGMwZGUzMTQxPQDxBw==: 00:27:14.066 09:55:37 -- host/auth.sh@47 -- # echo 'hmac(sha384)' 00:27:14.066 09:55:37 -- host/auth.sh@48 -- # echo ffdhe4096 00:27:14.066 09:55:37 -- host/auth.sh@49 -- # echo DHHC-1:00:Mjc5MDM2ZDVjN2FjYzYxODdiNjIyM2ZiNzA3Y2Y5ZTQ4NWQ0ODUwNGMwZGUzMTQxPQDxBw==: 00:27:14.066 09:55:37 -- host/auth.sh@111 -- # connect_authenticate sha384 ffdhe4096 1 00:27:14.066 09:55:37 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:27:14.066 09:55:37 -- host/auth.sh@68 -- # digest=sha384 00:27:14.066 09:55:37 -- host/auth.sh@68 -- # dhgroup=ffdhe4096 00:27:14.066 09:55:37 -- host/auth.sh@68 -- # keyid=1 00:27:14.066 09:55:37 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:27:14.066 09:55:37 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:14.066 09:55:37 -- common/autotest_common.sh@10 -- # set +x 00:27:14.066 09:55:37 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:14.066 09:55:37 -- host/auth.sh@70 -- # get_main_ns_ip 00:27:14.066 09:55:37 -- nvmf/common.sh@717 -- # local ip 00:27:14.066 09:55:37 -- nvmf/common.sh@718 -- # ip_candidates=() 00:27:14.066 09:55:37 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:27:14.066 09:55:37 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:14.066 09:55:37 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:14.066 09:55:37 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:27:14.066 09:55:37 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:14.066 09:55:37 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:27:14.066 09:55:37 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:27:14.066 09:55:37 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:27:14.066 09:55:37 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 00:27:14.066 09:55:37 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:14.066 09:55:37 -- common/autotest_common.sh@10 -- # set +x 00:27:14.324 nvme0n1 00:27:14.324 09:55:37 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:14.324 09:55:37 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:27:14.324 09:55:37 -- host/auth.sh@73 -- # jq -r '.[].name' 00:27:14.324 09:55:37 -- common/autotest_common.sh@549 -- # xtrace_disable 
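
The trace above repeats one verification cycle per digest/DH-group/key combination: nvmet_auth_set_key hands the target the DHHC-1 secret, bdev_nvme_set_options restricts the initiator to that digest and FFDHE group, bdev_nvme_attach_controller connects with the matching --dhchap-key, bdev_nvme_get_controllers confirms a controller named nvme0 exists, and bdev_nvme_detach_controller tears it down before the next combination (the bare nvme0n1 lines interleaved in the trace appear to be the attached controller's namespace showing up). A minimal sketch of one initiator-side iteration, using only the RPCs visible in the trace; rpc_cmd is assumed here to be the autotest_common.sh wrapper around SPDK's scripts/rpc.py, and the address, NQNs and key name are copied from the log:

  # One connect/verify/detach pass as traced above (sha384 + ffdhe4096, key 2).
  digest=sha384
  dhgroup=ffdhe4096
  keyid=2

  # Allow only the digest and DH group under test for this pass.
  rpc_cmd bdev_nvme_set_options --dhchap-digests "$digest" --dhchap-dhgroups "$dhgroup"

  # Attach over TCP with the key under test; on success the nvme0n1 bdev appears.
  rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
      -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key "key$keyid"

  # Verify the controller really authenticated, then detach for the next combination.
  [[ "$(rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name')" == "nvme0" ]]
  rpc_cmd bdev_nvme_detach_controller nvme0
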
00:27:14.324 09:55:37 -- common/autotest_common.sh@10 -- # set +x 00:27:14.324 09:55:37 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:14.324 09:55:37 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:14.324 09:55:37 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:14.324 09:55:37 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:14.324 09:55:37 -- common/autotest_common.sh@10 -- # set +x 00:27:14.324 09:55:37 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:14.324 09:55:37 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:27:14.324 09:55:37 -- host/auth.sh@110 -- # nvmet_auth_set_key sha384 ffdhe4096 2 00:27:14.324 09:55:37 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:27:14.324 09:55:37 -- host/auth.sh@44 -- # digest=sha384 00:27:14.324 09:55:37 -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:27:14.324 09:55:37 -- host/auth.sh@44 -- # keyid=2 00:27:14.324 09:55:37 -- host/auth.sh@45 -- # key=DHHC-1:01:OGY4NmI0ZGZmZGU3ZGE5YmQyMDJmZjY2ZTYwOTAyYjTeBEM7: 00:27:14.324 09:55:37 -- host/auth.sh@47 -- # echo 'hmac(sha384)' 00:27:14.324 09:55:37 -- host/auth.sh@48 -- # echo ffdhe4096 00:27:14.325 09:55:37 -- host/auth.sh@49 -- # echo DHHC-1:01:OGY4NmI0ZGZmZGU3ZGE5YmQyMDJmZjY2ZTYwOTAyYjTeBEM7: 00:27:14.325 09:55:37 -- host/auth.sh@111 -- # connect_authenticate sha384 ffdhe4096 2 00:27:14.325 09:55:37 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:27:14.325 09:55:37 -- host/auth.sh@68 -- # digest=sha384 00:27:14.325 09:55:37 -- host/auth.sh@68 -- # dhgroup=ffdhe4096 00:27:14.325 09:55:37 -- host/auth.sh@68 -- # keyid=2 00:27:14.325 09:55:37 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:27:14.325 09:55:37 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:14.325 09:55:37 -- common/autotest_common.sh@10 -- # set +x 00:27:14.325 09:55:37 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:14.325 09:55:37 -- host/auth.sh@70 -- # get_main_ns_ip 00:27:14.325 09:55:37 -- nvmf/common.sh@717 -- # local ip 00:27:14.325 09:55:37 -- nvmf/common.sh@718 -- # ip_candidates=() 00:27:14.325 09:55:37 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:27:14.325 09:55:37 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:14.325 09:55:37 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:14.325 09:55:37 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:27:14.325 09:55:37 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:14.325 09:55:37 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:27:14.325 09:55:37 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:27:14.325 09:55:37 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:27:14.325 09:55:37 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:27:14.325 09:55:37 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:14.325 09:55:37 -- common/autotest_common.sh@10 -- # set +x 00:27:14.583 nvme0n1 00:27:14.583 09:55:37 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:14.583 09:55:37 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:27:14.583 09:55:37 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:14.583 09:55:37 -- host/auth.sh@73 -- # jq -r '.[].name' 00:27:14.583 09:55:37 -- common/autotest_common.sh@10 -- # set +x 00:27:14.583 09:55:37 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:14.583 09:55:37 -- 
host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:14.583 09:55:37 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:14.583 09:55:37 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:14.583 09:55:37 -- common/autotest_common.sh@10 -- # set +x 00:27:14.583 09:55:37 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:14.583 09:55:37 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:27:14.583 09:55:37 -- host/auth.sh@110 -- # nvmet_auth_set_key sha384 ffdhe4096 3 00:27:14.583 09:55:37 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:27:14.583 09:55:37 -- host/auth.sh@44 -- # digest=sha384 00:27:14.583 09:55:37 -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:27:14.583 09:55:37 -- host/auth.sh@44 -- # keyid=3 00:27:14.583 09:55:37 -- host/auth.sh@45 -- # key=DHHC-1:02:OGYxMGE0MzAzYjdhZDkwNzZjZmQ5NGVhNWZjNGNlZDA4MzM0YzM2NjRhZGJkZWM28+CvFA==: 00:27:14.583 09:55:37 -- host/auth.sh@47 -- # echo 'hmac(sha384)' 00:27:14.583 09:55:37 -- host/auth.sh@48 -- # echo ffdhe4096 00:27:14.583 09:55:37 -- host/auth.sh@49 -- # echo DHHC-1:02:OGYxMGE0MzAzYjdhZDkwNzZjZmQ5NGVhNWZjNGNlZDA4MzM0YzM2NjRhZGJkZWM28+CvFA==: 00:27:14.583 09:55:37 -- host/auth.sh@111 -- # connect_authenticate sha384 ffdhe4096 3 00:27:14.583 09:55:37 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:27:14.583 09:55:37 -- host/auth.sh@68 -- # digest=sha384 00:27:14.583 09:55:37 -- host/auth.sh@68 -- # dhgroup=ffdhe4096 00:27:14.583 09:55:37 -- host/auth.sh@68 -- # keyid=3 00:27:14.583 09:55:37 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:27:14.583 09:55:37 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:14.583 09:55:37 -- common/autotest_common.sh@10 -- # set +x 00:27:14.583 09:55:37 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:14.583 09:55:37 -- host/auth.sh@70 -- # get_main_ns_ip 00:27:14.583 09:55:37 -- nvmf/common.sh@717 -- # local ip 00:27:14.583 09:55:37 -- nvmf/common.sh@718 -- # ip_candidates=() 00:27:14.583 09:55:37 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:27:14.583 09:55:37 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:14.583 09:55:37 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:14.583 09:55:37 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:27:14.583 09:55:37 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:14.583 09:55:37 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:27:14.583 09:55:37 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:27:14.583 09:55:37 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:27:14.583 09:55:37 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 00:27:14.583 09:55:37 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:14.583 09:55:37 -- common/autotest_common.sh@10 -- # set +x 00:27:14.842 nvme0n1 00:27:14.842 09:55:38 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:14.842 09:55:38 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:27:14.842 09:55:38 -- host/auth.sh@73 -- # jq -r '.[].name' 00:27:14.842 09:55:38 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:14.842 09:55:38 -- common/autotest_common.sh@10 -- # set +x 00:27:14.842 09:55:38 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:14.842 09:55:38 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:14.842 09:55:38 -- host/auth.sh@74 -- # rpc_cmd 
bdev_nvme_detach_controller nvme0 00:27:14.842 09:55:38 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:14.842 09:55:38 -- common/autotest_common.sh@10 -- # set +x 00:27:14.842 09:55:38 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:14.842 09:55:38 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:27:14.842 09:55:38 -- host/auth.sh@110 -- # nvmet_auth_set_key sha384 ffdhe4096 4 00:27:14.842 09:55:38 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:27:14.842 09:55:38 -- host/auth.sh@44 -- # digest=sha384 00:27:14.842 09:55:38 -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:27:14.842 09:55:38 -- host/auth.sh@44 -- # keyid=4 00:27:14.842 09:55:38 -- host/auth.sh@45 -- # key=DHHC-1:03:ZjFmYzBmMDlmZmUwZTY5NTA4ODQwODM5MDQ5NzUzZDE1NmE3ZGZkMTY3ZWYyMTBhYmQ3N2ZjNDQ4YmZiNzcxZYTk7W0=: 00:27:14.842 09:55:38 -- host/auth.sh@47 -- # echo 'hmac(sha384)' 00:27:14.842 09:55:38 -- host/auth.sh@48 -- # echo ffdhe4096 00:27:14.842 09:55:38 -- host/auth.sh@49 -- # echo DHHC-1:03:ZjFmYzBmMDlmZmUwZTY5NTA4ODQwODM5MDQ5NzUzZDE1NmE3ZGZkMTY3ZWYyMTBhYmQ3N2ZjNDQ4YmZiNzcxZYTk7W0=: 00:27:14.842 09:55:38 -- host/auth.sh@111 -- # connect_authenticate sha384 ffdhe4096 4 00:27:14.842 09:55:38 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:27:14.842 09:55:38 -- host/auth.sh@68 -- # digest=sha384 00:27:14.842 09:55:38 -- host/auth.sh@68 -- # dhgroup=ffdhe4096 00:27:14.842 09:55:38 -- host/auth.sh@68 -- # keyid=4 00:27:14.842 09:55:38 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:27:14.842 09:55:38 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:14.842 09:55:38 -- common/autotest_common.sh@10 -- # set +x 00:27:14.842 09:55:38 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:14.842 09:55:38 -- host/auth.sh@70 -- # get_main_ns_ip 00:27:14.842 09:55:38 -- nvmf/common.sh@717 -- # local ip 00:27:14.842 09:55:38 -- nvmf/common.sh@718 -- # ip_candidates=() 00:27:14.842 09:55:38 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:27:14.842 09:55:38 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:14.842 09:55:38 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:14.842 09:55:38 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:27:14.842 09:55:38 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:14.842 09:55:38 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:27:14.842 09:55:38 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:27:14.842 09:55:38 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:27:14.842 09:55:38 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:27:14.842 09:55:38 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:14.842 09:55:38 -- common/autotest_common.sh@10 -- # set +x 00:27:15.101 nvme0n1 00:27:15.101 09:55:38 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:15.101 09:55:38 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:27:15.101 09:55:38 -- host/auth.sh@73 -- # jq -r '.[].name' 00:27:15.101 09:55:38 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:15.101 09:55:38 -- common/autotest_common.sh@10 -- # set +x 00:27:15.101 09:55:38 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:15.101 09:55:38 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:15.101 09:55:38 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:15.102 09:55:38 -- 
common/autotest_common.sh@549 -- # xtrace_disable 00:27:15.102 09:55:38 -- common/autotest_common.sh@10 -- # set +x 00:27:15.102 09:55:38 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:15.102 09:55:38 -- host/auth.sh@108 -- # for dhgroup in "${dhgroups[@]}" 00:27:15.102 09:55:38 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:27:15.102 09:55:38 -- host/auth.sh@110 -- # nvmet_auth_set_key sha384 ffdhe6144 0 00:27:15.102 09:55:38 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:27:15.102 09:55:38 -- host/auth.sh@44 -- # digest=sha384 00:27:15.102 09:55:38 -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:27:15.102 09:55:38 -- host/auth.sh@44 -- # keyid=0 00:27:15.102 09:55:38 -- host/auth.sh@45 -- # key=DHHC-1:00:M2MzOGVhYjFjYjdjODJlMTAyZDJhM2IwMWRmMmI4ODN6Hpbr: 00:27:15.102 09:55:38 -- host/auth.sh@47 -- # echo 'hmac(sha384)' 00:27:15.102 09:55:38 -- host/auth.sh@48 -- # echo ffdhe6144 00:27:15.102 09:55:38 -- host/auth.sh@49 -- # echo DHHC-1:00:M2MzOGVhYjFjYjdjODJlMTAyZDJhM2IwMWRmMmI4ODN6Hpbr: 00:27:15.102 09:55:38 -- host/auth.sh@111 -- # connect_authenticate sha384 ffdhe6144 0 00:27:15.102 09:55:38 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:27:15.102 09:55:38 -- host/auth.sh@68 -- # digest=sha384 00:27:15.102 09:55:38 -- host/auth.sh@68 -- # dhgroup=ffdhe6144 00:27:15.102 09:55:38 -- host/auth.sh@68 -- # keyid=0 00:27:15.102 09:55:38 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:27:15.102 09:55:38 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:15.102 09:55:38 -- common/autotest_common.sh@10 -- # set +x 00:27:15.102 09:55:38 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:15.102 09:55:38 -- host/auth.sh@70 -- # get_main_ns_ip 00:27:15.102 09:55:38 -- nvmf/common.sh@717 -- # local ip 00:27:15.102 09:55:38 -- nvmf/common.sh@718 -- # ip_candidates=() 00:27:15.102 09:55:38 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:27:15.102 09:55:38 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:15.102 09:55:38 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:15.102 09:55:38 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:27:15.102 09:55:38 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:15.102 09:55:38 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:27:15.102 09:55:38 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:27:15.102 09:55:38 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:27:15.102 09:55:38 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 00:27:15.102 09:55:38 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:15.102 09:55:38 -- common/autotest_common.sh@10 -- # set +x 00:27:15.360 nvme0n1 00:27:15.360 09:55:38 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:15.360 09:55:38 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:27:15.360 09:55:38 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:15.360 09:55:38 -- host/auth.sh@73 -- # jq -r '.[].name' 00:27:15.360 09:55:38 -- common/autotest_common.sh@10 -- # set +x 00:27:15.360 09:55:38 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:15.622 09:55:38 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:15.622 09:55:38 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:15.622 09:55:38 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:15.622 09:55:38 -- 
common/autotest_common.sh@10 -- # set +x 00:27:15.622 09:55:38 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:15.622 09:55:38 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:27:15.622 09:55:38 -- host/auth.sh@110 -- # nvmet_auth_set_key sha384 ffdhe6144 1 00:27:15.622 09:55:38 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:27:15.622 09:55:38 -- host/auth.sh@44 -- # digest=sha384 00:27:15.622 09:55:38 -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:27:15.622 09:55:38 -- host/auth.sh@44 -- # keyid=1 00:27:15.622 09:55:38 -- host/auth.sh@45 -- # key=DHHC-1:00:Mjc5MDM2ZDVjN2FjYzYxODdiNjIyM2ZiNzA3Y2Y5ZTQ4NWQ0ODUwNGMwZGUzMTQxPQDxBw==: 00:27:15.622 09:55:38 -- host/auth.sh@47 -- # echo 'hmac(sha384)' 00:27:15.622 09:55:38 -- host/auth.sh@48 -- # echo ffdhe6144 00:27:15.622 09:55:38 -- host/auth.sh@49 -- # echo DHHC-1:00:Mjc5MDM2ZDVjN2FjYzYxODdiNjIyM2ZiNzA3Y2Y5ZTQ4NWQ0ODUwNGMwZGUzMTQxPQDxBw==: 00:27:15.622 09:55:38 -- host/auth.sh@111 -- # connect_authenticate sha384 ffdhe6144 1 00:27:15.622 09:55:38 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:27:15.622 09:55:38 -- host/auth.sh@68 -- # digest=sha384 00:27:15.622 09:55:38 -- host/auth.sh@68 -- # dhgroup=ffdhe6144 00:27:15.622 09:55:38 -- host/auth.sh@68 -- # keyid=1 00:27:15.622 09:55:38 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:27:15.622 09:55:38 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:15.622 09:55:38 -- common/autotest_common.sh@10 -- # set +x 00:27:15.622 09:55:38 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:15.622 09:55:38 -- host/auth.sh@70 -- # get_main_ns_ip 00:27:15.622 09:55:38 -- nvmf/common.sh@717 -- # local ip 00:27:15.622 09:55:38 -- nvmf/common.sh@718 -- # ip_candidates=() 00:27:15.622 09:55:38 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:27:15.622 09:55:38 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:15.622 09:55:38 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:15.622 09:55:38 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:27:15.622 09:55:38 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:15.622 09:55:38 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:27:15.622 09:55:38 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:27:15.622 09:55:38 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:27:15.622 09:55:38 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 00:27:15.622 09:55:38 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:15.622 09:55:38 -- common/autotest_common.sh@10 -- # set +x 00:27:15.881 nvme0n1 00:27:15.881 09:55:39 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:15.881 09:55:39 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:27:15.881 09:55:39 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:15.881 09:55:39 -- host/auth.sh@73 -- # jq -r '.[].name' 00:27:15.881 09:55:39 -- common/autotest_common.sh@10 -- # set +x 00:27:15.881 09:55:39 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:15.881 09:55:39 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:15.881 09:55:39 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:15.881 09:55:39 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:15.881 09:55:39 -- common/autotest_common.sh@10 -- # set +x 00:27:15.881 09:55:39 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 
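
For orientation, this whole stretch of the trace is produced by the nested loops at host/auth.sh lines 107-111: every allowed digest is tried against every FFDHE group and every configured key, and each combination runs the target-side nvmet_auth_set_key followed by the initiator-side connect_authenticate cycle sketched earlier. A rough reconstruction of that driver loop follows; the DHHC-1 secrets are copied verbatim from the trace, while the exact contents of the digests and dhgroups arrays in the real script are an assumption (only the values exercised around this part of the log are listed):

  # Driver loop as reconstructed from host/auth.sh@107-111 in the trace.
  keys=(
      "DHHC-1:00:M2MzOGVhYjFjYjdjODJlMTAyZDJhM2IwMWRmMmI4ODN6Hpbr:"
      "DHHC-1:00:Mjc5MDM2ZDVjN2FjYzYxODdiNjIyM2ZiNzA3Y2Y5ZTQ4NWQ0ODUwNGMwZGUzMTQxPQDxBw==:"
      "DHHC-1:01:OGY4NmI0ZGZmZGU3ZGE5YmQyMDJmZjY2ZTYwOTAyYjTeBEM7:"
      "DHHC-1:02:OGYxMGE0MzAzYjdhZDkwNzZjZmQ5NGVhNWZjNGNlZDA4MzM0YzM2NjRhZGJkZWM28+CvFA==:"
      "DHHC-1:03:ZjFmYzBmMDlmZmUwZTY5NTA4ODQwODM5MDQ5NzUzZDE1NmE3ZGZkMTY3ZWYyMTBhYmQ3N2ZjNDQ4YmZiNzcxZYTk7W0=:"
  )
  digests=(sha384 sha512)                                        # sha384 runs here; sha512 starts near the end of this stretch
  dhgroups=(ffdhe2048 ffdhe3072 ffdhe4096 ffdhe6144 ffdhe8192)

  for digest in "${digests[@]}"; do
      for dhgroup in "${dhgroups[@]}"; do
          for keyid in "${!keys[@]}"; do
              nvmet_auth_set_key "$digest" "$dhgroup" "$keyid"       # target side: digest, group, DHHC-1 secret
              connect_authenticate "$digest" "$dhgroup" "$keyid"     # initiator side: set options, attach, verify, detach
          done
      done
  done
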
00:27:15.881 09:55:39 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:27:15.881 09:55:39 -- host/auth.sh@110 -- # nvmet_auth_set_key sha384 ffdhe6144 2 00:27:15.881 09:55:39 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:27:15.881 09:55:39 -- host/auth.sh@44 -- # digest=sha384 00:27:15.881 09:55:39 -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:27:15.881 09:55:39 -- host/auth.sh@44 -- # keyid=2 00:27:15.881 09:55:39 -- host/auth.sh@45 -- # key=DHHC-1:01:OGY4NmI0ZGZmZGU3ZGE5YmQyMDJmZjY2ZTYwOTAyYjTeBEM7: 00:27:15.881 09:55:39 -- host/auth.sh@47 -- # echo 'hmac(sha384)' 00:27:15.881 09:55:39 -- host/auth.sh@48 -- # echo ffdhe6144 00:27:15.881 09:55:39 -- host/auth.sh@49 -- # echo DHHC-1:01:OGY4NmI0ZGZmZGU3ZGE5YmQyMDJmZjY2ZTYwOTAyYjTeBEM7: 00:27:15.881 09:55:39 -- host/auth.sh@111 -- # connect_authenticate sha384 ffdhe6144 2 00:27:15.882 09:55:39 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:27:15.882 09:55:39 -- host/auth.sh@68 -- # digest=sha384 00:27:15.882 09:55:39 -- host/auth.sh@68 -- # dhgroup=ffdhe6144 00:27:15.882 09:55:39 -- host/auth.sh@68 -- # keyid=2 00:27:15.882 09:55:39 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:27:15.882 09:55:39 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:15.882 09:55:39 -- common/autotest_common.sh@10 -- # set +x 00:27:15.882 09:55:39 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:15.882 09:55:39 -- host/auth.sh@70 -- # get_main_ns_ip 00:27:15.882 09:55:39 -- nvmf/common.sh@717 -- # local ip 00:27:15.882 09:55:39 -- nvmf/common.sh@718 -- # ip_candidates=() 00:27:15.882 09:55:39 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:27:15.882 09:55:39 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:15.882 09:55:39 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:15.882 09:55:39 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:27:15.882 09:55:39 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:15.882 09:55:39 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:27:15.882 09:55:39 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:27:15.882 09:55:39 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:27:15.882 09:55:39 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:27:15.882 09:55:39 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:15.882 09:55:39 -- common/autotest_common.sh@10 -- # set +x 00:27:16.450 nvme0n1 00:27:16.450 09:55:39 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:16.450 09:55:39 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:27:16.450 09:55:39 -- host/auth.sh@73 -- # jq -r '.[].name' 00:27:16.450 09:55:39 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:16.450 09:55:39 -- common/autotest_common.sh@10 -- # set +x 00:27:16.450 09:55:39 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:16.450 09:55:39 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:16.450 09:55:39 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:16.450 09:55:39 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:16.450 09:55:39 -- common/autotest_common.sh@10 -- # set +x 00:27:16.450 09:55:39 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:16.450 09:55:39 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:27:16.450 09:55:39 -- host/auth.sh@110 -- # nvmet_auth_set_key sha384 ffdhe6144 3 
00:27:16.450 09:55:39 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:27:16.450 09:55:39 -- host/auth.sh@44 -- # digest=sha384 00:27:16.450 09:55:39 -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:27:16.450 09:55:39 -- host/auth.sh@44 -- # keyid=3 00:27:16.450 09:55:39 -- host/auth.sh@45 -- # key=DHHC-1:02:OGYxMGE0MzAzYjdhZDkwNzZjZmQ5NGVhNWZjNGNlZDA4MzM0YzM2NjRhZGJkZWM28+CvFA==: 00:27:16.450 09:55:39 -- host/auth.sh@47 -- # echo 'hmac(sha384)' 00:27:16.450 09:55:39 -- host/auth.sh@48 -- # echo ffdhe6144 00:27:16.450 09:55:39 -- host/auth.sh@49 -- # echo DHHC-1:02:OGYxMGE0MzAzYjdhZDkwNzZjZmQ5NGVhNWZjNGNlZDA4MzM0YzM2NjRhZGJkZWM28+CvFA==: 00:27:16.450 09:55:39 -- host/auth.sh@111 -- # connect_authenticate sha384 ffdhe6144 3 00:27:16.450 09:55:39 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:27:16.450 09:55:39 -- host/auth.sh@68 -- # digest=sha384 00:27:16.450 09:55:39 -- host/auth.sh@68 -- # dhgroup=ffdhe6144 00:27:16.450 09:55:39 -- host/auth.sh@68 -- # keyid=3 00:27:16.450 09:55:39 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:27:16.450 09:55:39 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:16.450 09:55:39 -- common/autotest_common.sh@10 -- # set +x 00:27:16.450 09:55:39 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:16.450 09:55:39 -- host/auth.sh@70 -- # get_main_ns_ip 00:27:16.450 09:55:39 -- nvmf/common.sh@717 -- # local ip 00:27:16.450 09:55:39 -- nvmf/common.sh@718 -- # ip_candidates=() 00:27:16.450 09:55:39 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:27:16.450 09:55:39 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:16.450 09:55:39 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:16.450 09:55:39 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:27:16.450 09:55:39 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:16.450 09:55:39 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:27:16.450 09:55:39 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:27:16.450 09:55:39 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:27:16.450 09:55:39 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 00:27:16.450 09:55:39 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:16.450 09:55:39 -- common/autotest_common.sh@10 -- # set +x 00:27:16.710 nvme0n1 00:27:16.710 09:55:40 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:16.710 09:55:40 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:27:16.710 09:55:40 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:16.710 09:55:40 -- host/auth.sh@73 -- # jq -r '.[].name' 00:27:16.710 09:55:40 -- common/autotest_common.sh@10 -- # set +x 00:27:16.710 09:55:40 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:16.968 09:55:40 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:16.968 09:55:40 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:16.968 09:55:40 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:16.968 09:55:40 -- common/autotest_common.sh@10 -- # set +x 00:27:16.968 09:55:40 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:16.968 09:55:40 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:27:16.968 09:55:40 -- host/auth.sh@110 -- # nvmet_auth_set_key sha384 ffdhe6144 4 00:27:16.968 09:55:40 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:27:16.968 09:55:40 -- host/auth.sh@44 -- 
# digest=sha384 00:27:16.968 09:55:40 -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:27:16.968 09:55:40 -- host/auth.sh@44 -- # keyid=4 00:27:16.968 09:55:40 -- host/auth.sh@45 -- # key=DHHC-1:03:ZjFmYzBmMDlmZmUwZTY5NTA4ODQwODM5MDQ5NzUzZDE1NmE3ZGZkMTY3ZWYyMTBhYmQ3N2ZjNDQ4YmZiNzcxZYTk7W0=: 00:27:16.968 09:55:40 -- host/auth.sh@47 -- # echo 'hmac(sha384)' 00:27:16.968 09:55:40 -- host/auth.sh@48 -- # echo ffdhe6144 00:27:16.968 09:55:40 -- host/auth.sh@49 -- # echo DHHC-1:03:ZjFmYzBmMDlmZmUwZTY5NTA4ODQwODM5MDQ5NzUzZDE1NmE3ZGZkMTY3ZWYyMTBhYmQ3N2ZjNDQ4YmZiNzcxZYTk7W0=: 00:27:16.968 09:55:40 -- host/auth.sh@111 -- # connect_authenticate sha384 ffdhe6144 4 00:27:16.968 09:55:40 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:27:16.969 09:55:40 -- host/auth.sh@68 -- # digest=sha384 00:27:16.969 09:55:40 -- host/auth.sh@68 -- # dhgroup=ffdhe6144 00:27:16.969 09:55:40 -- host/auth.sh@68 -- # keyid=4 00:27:16.969 09:55:40 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:27:16.969 09:55:40 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:16.969 09:55:40 -- common/autotest_common.sh@10 -- # set +x 00:27:16.969 09:55:40 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:16.969 09:55:40 -- host/auth.sh@70 -- # get_main_ns_ip 00:27:16.969 09:55:40 -- nvmf/common.sh@717 -- # local ip 00:27:16.969 09:55:40 -- nvmf/common.sh@718 -- # ip_candidates=() 00:27:16.969 09:55:40 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:27:16.969 09:55:40 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:16.969 09:55:40 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:16.969 09:55:40 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:27:16.969 09:55:40 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:16.969 09:55:40 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:27:16.969 09:55:40 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:27:16.969 09:55:40 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:27:16.969 09:55:40 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:27:16.969 09:55:40 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:16.969 09:55:40 -- common/autotest_common.sh@10 -- # set +x 00:27:17.227 nvme0n1 00:27:17.227 09:55:40 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:17.227 09:55:40 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:27:17.227 09:55:40 -- host/auth.sh@73 -- # jq -r '.[].name' 00:27:17.227 09:55:40 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:17.227 09:55:40 -- common/autotest_common.sh@10 -- # set +x 00:27:17.227 09:55:40 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:17.227 09:55:40 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:17.227 09:55:40 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:17.227 09:55:40 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:17.227 09:55:40 -- common/autotest_common.sh@10 -- # set +x 00:27:17.227 09:55:40 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:17.227 09:55:40 -- host/auth.sh@108 -- # for dhgroup in "${dhgroups[@]}" 00:27:17.227 09:55:40 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:27:17.227 09:55:40 -- host/auth.sh@110 -- # nvmet_auth_set_key sha384 ffdhe8192 0 00:27:17.227 09:55:40 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:27:17.227 09:55:40 -- host/auth.sh@44 -- # 
digest=sha384 00:27:17.227 09:55:40 -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:27:17.227 09:55:40 -- host/auth.sh@44 -- # keyid=0 00:27:17.227 09:55:40 -- host/auth.sh@45 -- # key=DHHC-1:00:M2MzOGVhYjFjYjdjODJlMTAyZDJhM2IwMWRmMmI4ODN6Hpbr: 00:27:17.227 09:55:40 -- host/auth.sh@47 -- # echo 'hmac(sha384)' 00:27:17.227 09:55:40 -- host/auth.sh@48 -- # echo ffdhe8192 00:27:17.227 09:55:40 -- host/auth.sh@49 -- # echo DHHC-1:00:M2MzOGVhYjFjYjdjODJlMTAyZDJhM2IwMWRmMmI4ODN6Hpbr: 00:27:17.227 09:55:40 -- host/auth.sh@111 -- # connect_authenticate sha384 ffdhe8192 0 00:27:17.227 09:55:40 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:27:17.227 09:55:40 -- host/auth.sh@68 -- # digest=sha384 00:27:17.227 09:55:40 -- host/auth.sh@68 -- # dhgroup=ffdhe8192 00:27:17.227 09:55:40 -- host/auth.sh@68 -- # keyid=0 00:27:17.227 09:55:40 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:27:17.227 09:55:40 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:17.227 09:55:40 -- common/autotest_common.sh@10 -- # set +x 00:27:17.227 09:55:40 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:17.227 09:55:40 -- host/auth.sh@70 -- # get_main_ns_ip 00:27:17.227 09:55:40 -- nvmf/common.sh@717 -- # local ip 00:27:17.227 09:55:40 -- nvmf/common.sh@718 -- # ip_candidates=() 00:27:17.227 09:55:40 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:27:17.227 09:55:40 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:17.227 09:55:40 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:17.228 09:55:40 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:27:17.228 09:55:40 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:17.228 09:55:40 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:27:17.228 09:55:40 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:27:17.228 09:55:40 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:27:17.228 09:55:40 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 00:27:17.228 09:55:40 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:17.228 09:55:40 -- common/autotest_common.sh@10 -- # set +x 00:27:18.164 nvme0n1 00:27:18.164 09:55:41 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:18.164 09:55:41 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:27:18.164 09:55:41 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:18.164 09:55:41 -- host/auth.sh@73 -- # jq -r '.[].name' 00:27:18.164 09:55:41 -- common/autotest_common.sh@10 -- # set +x 00:27:18.164 09:55:41 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:18.164 09:55:41 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:18.164 09:55:41 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:18.164 09:55:41 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:18.164 09:55:41 -- common/autotest_common.sh@10 -- # set +x 00:27:18.164 09:55:41 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:18.164 09:55:41 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:27:18.164 09:55:41 -- host/auth.sh@110 -- # nvmet_auth_set_key sha384 ffdhe8192 1 00:27:18.164 09:55:41 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:27:18.164 09:55:41 -- host/auth.sh@44 -- # digest=sha384 00:27:18.164 09:55:41 -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:27:18.164 09:55:41 -- host/auth.sh@44 -- # keyid=1 00:27:18.164 09:55:41 -- 
host/auth.sh@45 -- # key=DHHC-1:00:Mjc5MDM2ZDVjN2FjYzYxODdiNjIyM2ZiNzA3Y2Y5ZTQ4NWQ0ODUwNGMwZGUzMTQxPQDxBw==: 00:27:18.164 09:55:41 -- host/auth.sh@47 -- # echo 'hmac(sha384)' 00:27:18.164 09:55:41 -- host/auth.sh@48 -- # echo ffdhe8192 00:27:18.164 09:55:41 -- host/auth.sh@49 -- # echo DHHC-1:00:Mjc5MDM2ZDVjN2FjYzYxODdiNjIyM2ZiNzA3Y2Y5ZTQ4NWQ0ODUwNGMwZGUzMTQxPQDxBw==: 00:27:18.164 09:55:41 -- host/auth.sh@111 -- # connect_authenticate sha384 ffdhe8192 1 00:27:18.164 09:55:41 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:27:18.164 09:55:41 -- host/auth.sh@68 -- # digest=sha384 00:27:18.164 09:55:41 -- host/auth.sh@68 -- # dhgroup=ffdhe8192 00:27:18.164 09:55:41 -- host/auth.sh@68 -- # keyid=1 00:27:18.164 09:55:41 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:27:18.164 09:55:41 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:18.164 09:55:41 -- common/autotest_common.sh@10 -- # set +x 00:27:18.164 09:55:41 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:18.164 09:55:41 -- host/auth.sh@70 -- # get_main_ns_ip 00:27:18.164 09:55:41 -- nvmf/common.sh@717 -- # local ip 00:27:18.164 09:55:41 -- nvmf/common.sh@718 -- # ip_candidates=() 00:27:18.164 09:55:41 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:27:18.164 09:55:41 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:18.165 09:55:41 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:18.165 09:55:41 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:27:18.165 09:55:41 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:18.165 09:55:41 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:27:18.165 09:55:41 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:27:18.165 09:55:41 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:27:18.165 09:55:41 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 00:27:18.165 09:55:41 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:18.165 09:55:41 -- common/autotest_common.sh@10 -- # set +x 00:27:18.732 nvme0n1 00:27:18.732 09:55:41 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:18.732 09:55:41 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:27:18.732 09:55:41 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:18.732 09:55:41 -- common/autotest_common.sh@10 -- # set +x 00:27:18.732 09:55:41 -- host/auth.sh@73 -- # jq -r '.[].name' 00:27:18.732 09:55:41 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:18.732 09:55:42 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:18.732 09:55:42 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:18.732 09:55:42 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:18.732 09:55:42 -- common/autotest_common.sh@10 -- # set +x 00:27:18.732 09:55:42 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:18.732 09:55:42 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:27:18.732 09:55:42 -- host/auth.sh@110 -- # nvmet_auth_set_key sha384 ffdhe8192 2 00:27:18.732 09:55:42 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:27:18.732 09:55:42 -- host/auth.sh@44 -- # digest=sha384 00:27:18.732 09:55:42 -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:27:18.732 09:55:42 -- host/auth.sh@44 -- # keyid=2 00:27:18.732 09:55:42 -- host/auth.sh@45 -- # key=DHHC-1:01:OGY4NmI0ZGZmZGU3ZGE5YmQyMDJmZjY2ZTYwOTAyYjTeBEM7: 00:27:18.732 09:55:42 -- 
host/auth.sh@47 -- # echo 'hmac(sha384)' 00:27:18.732 09:55:42 -- host/auth.sh@48 -- # echo ffdhe8192 00:27:18.732 09:55:42 -- host/auth.sh@49 -- # echo DHHC-1:01:OGY4NmI0ZGZmZGU3ZGE5YmQyMDJmZjY2ZTYwOTAyYjTeBEM7: 00:27:18.732 09:55:42 -- host/auth.sh@111 -- # connect_authenticate sha384 ffdhe8192 2 00:27:18.732 09:55:42 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:27:18.732 09:55:42 -- host/auth.sh@68 -- # digest=sha384 00:27:18.732 09:55:42 -- host/auth.sh@68 -- # dhgroup=ffdhe8192 00:27:18.732 09:55:42 -- host/auth.sh@68 -- # keyid=2 00:27:18.732 09:55:42 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:27:18.732 09:55:42 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:18.732 09:55:42 -- common/autotest_common.sh@10 -- # set +x 00:27:18.732 09:55:42 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:18.732 09:55:42 -- host/auth.sh@70 -- # get_main_ns_ip 00:27:18.732 09:55:42 -- nvmf/common.sh@717 -- # local ip 00:27:18.732 09:55:42 -- nvmf/common.sh@718 -- # ip_candidates=() 00:27:18.732 09:55:42 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:27:18.732 09:55:42 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:18.732 09:55:42 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:18.732 09:55:42 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:27:18.732 09:55:42 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:18.732 09:55:42 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:27:18.732 09:55:42 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:27:18.732 09:55:42 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:27:18.732 09:55:42 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:27:18.732 09:55:42 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:18.732 09:55:42 -- common/autotest_common.sh@10 -- # set +x 00:27:19.300 nvme0n1 00:27:19.300 09:55:42 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:19.300 09:55:42 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:27:19.300 09:55:42 -- host/auth.sh@73 -- # jq -r '.[].name' 00:27:19.300 09:55:42 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:19.300 09:55:42 -- common/autotest_common.sh@10 -- # set +x 00:27:19.300 09:55:42 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:19.560 09:55:42 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:19.560 09:55:42 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:19.560 09:55:42 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:19.560 09:55:42 -- common/autotest_common.sh@10 -- # set +x 00:27:19.560 09:55:42 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:19.560 09:55:42 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:27:19.560 09:55:42 -- host/auth.sh@110 -- # nvmet_auth_set_key sha384 ffdhe8192 3 00:27:19.560 09:55:42 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:27:19.560 09:55:42 -- host/auth.sh@44 -- # digest=sha384 00:27:19.560 09:55:42 -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:27:19.560 09:55:42 -- host/auth.sh@44 -- # keyid=3 00:27:19.560 09:55:42 -- host/auth.sh@45 -- # key=DHHC-1:02:OGYxMGE0MzAzYjdhZDkwNzZjZmQ5NGVhNWZjNGNlZDA4MzM0YzM2NjRhZGJkZWM28+CvFA==: 00:27:19.560 09:55:42 -- host/auth.sh@47 -- # echo 'hmac(sha384)' 00:27:19.560 09:55:42 -- host/auth.sh@48 -- # echo ffdhe8192 00:27:19.560 09:55:42 -- host/auth.sh@49 
-- # echo DHHC-1:02:OGYxMGE0MzAzYjdhZDkwNzZjZmQ5NGVhNWZjNGNlZDA4MzM0YzM2NjRhZGJkZWM28+CvFA==: 00:27:19.560 09:55:42 -- host/auth.sh@111 -- # connect_authenticate sha384 ffdhe8192 3 00:27:19.560 09:55:42 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:27:19.560 09:55:42 -- host/auth.sh@68 -- # digest=sha384 00:27:19.560 09:55:42 -- host/auth.sh@68 -- # dhgroup=ffdhe8192 00:27:19.560 09:55:42 -- host/auth.sh@68 -- # keyid=3 00:27:19.560 09:55:42 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:27:19.560 09:55:42 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:19.560 09:55:42 -- common/autotest_common.sh@10 -- # set +x 00:27:19.560 09:55:42 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:19.560 09:55:42 -- host/auth.sh@70 -- # get_main_ns_ip 00:27:19.560 09:55:42 -- nvmf/common.sh@717 -- # local ip 00:27:19.560 09:55:42 -- nvmf/common.sh@718 -- # ip_candidates=() 00:27:19.560 09:55:42 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:27:19.560 09:55:42 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:19.560 09:55:42 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:19.560 09:55:42 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:27:19.560 09:55:42 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:19.560 09:55:42 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:27:19.560 09:55:42 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:27:19.560 09:55:42 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:27:19.560 09:55:42 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 00:27:19.560 09:55:42 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:19.560 09:55:42 -- common/autotest_common.sh@10 -- # set +x 00:27:20.127 nvme0n1 00:27:20.127 09:55:43 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:20.127 09:55:43 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:27:20.127 09:55:43 -- host/auth.sh@73 -- # jq -r '.[].name' 00:27:20.127 09:55:43 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:20.127 09:55:43 -- common/autotest_common.sh@10 -- # set +x 00:27:20.127 09:55:43 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:20.127 09:55:43 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:20.127 09:55:43 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:20.127 09:55:43 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:20.127 09:55:43 -- common/autotest_common.sh@10 -- # set +x 00:27:20.127 09:55:43 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:20.127 09:55:43 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:27:20.127 09:55:43 -- host/auth.sh@110 -- # nvmet_auth_set_key sha384 ffdhe8192 4 00:27:20.127 09:55:43 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:27:20.127 09:55:43 -- host/auth.sh@44 -- # digest=sha384 00:27:20.127 09:55:43 -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:27:20.127 09:55:43 -- host/auth.sh@44 -- # keyid=4 00:27:20.127 09:55:43 -- host/auth.sh@45 -- # key=DHHC-1:03:ZjFmYzBmMDlmZmUwZTY5NTA4ODQwODM5MDQ5NzUzZDE1NmE3ZGZkMTY3ZWYyMTBhYmQ3N2ZjNDQ4YmZiNzcxZYTk7W0=: 00:27:20.127 09:55:43 -- host/auth.sh@47 -- # echo 'hmac(sha384)' 00:27:20.127 09:55:43 -- host/auth.sh@48 -- # echo ffdhe8192 00:27:20.127 09:55:43 -- host/auth.sh@49 -- # echo 
DHHC-1:03:ZjFmYzBmMDlmZmUwZTY5NTA4ODQwODM5MDQ5NzUzZDE1NmE3ZGZkMTY3ZWYyMTBhYmQ3N2ZjNDQ4YmZiNzcxZYTk7W0=: 00:27:20.127 09:55:43 -- host/auth.sh@111 -- # connect_authenticate sha384 ffdhe8192 4 00:27:20.127 09:55:43 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:27:20.127 09:55:43 -- host/auth.sh@68 -- # digest=sha384 00:27:20.127 09:55:43 -- host/auth.sh@68 -- # dhgroup=ffdhe8192 00:27:20.127 09:55:43 -- host/auth.sh@68 -- # keyid=4 00:27:20.127 09:55:43 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:27:20.127 09:55:43 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:20.127 09:55:43 -- common/autotest_common.sh@10 -- # set +x 00:27:20.127 09:55:43 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:20.127 09:55:43 -- host/auth.sh@70 -- # get_main_ns_ip 00:27:20.127 09:55:43 -- nvmf/common.sh@717 -- # local ip 00:27:20.127 09:55:43 -- nvmf/common.sh@718 -- # ip_candidates=() 00:27:20.127 09:55:43 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:27:20.127 09:55:43 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:20.127 09:55:43 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:20.127 09:55:43 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:27:20.127 09:55:43 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:20.127 09:55:43 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:27:20.127 09:55:43 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:27:20.127 09:55:43 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:27:20.127 09:55:43 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:27:20.127 09:55:43 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:20.127 09:55:43 -- common/autotest_common.sh@10 -- # set +x 00:27:20.694 nvme0n1 00:27:20.694 09:55:44 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:20.694 09:55:44 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:27:20.694 09:55:44 -- host/auth.sh@73 -- # jq -r '.[].name' 00:27:20.694 09:55:44 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:20.694 09:55:44 -- common/autotest_common.sh@10 -- # set +x 00:27:20.954 09:55:44 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:20.954 09:55:44 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:20.954 09:55:44 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:20.954 09:55:44 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:20.954 09:55:44 -- common/autotest_common.sh@10 -- # set +x 00:27:20.954 09:55:44 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:20.954 09:55:44 -- host/auth.sh@107 -- # for digest in "${digests[@]}" 00:27:20.954 09:55:44 -- host/auth.sh@108 -- # for dhgroup in "${dhgroups[@]}" 00:27:20.954 09:55:44 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:27:20.954 09:55:44 -- host/auth.sh@110 -- # nvmet_auth_set_key sha512 ffdhe2048 0 00:27:20.954 09:55:44 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:27:20.954 09:55:44 -- host/auth.sh@44 -- # digest=sha512 00:27:20.954 09:55:44 -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:27:20.954 09:55:44 -- host/auth.sh@44 -- # keyid=0 00:27:20.954 09:55:44 -- host/auth.sh@45 -- # key=DHHC-1:00:M2MzOGVhYjFjYjdjODJlMTAyZDJhM2IwMWRmMmI4ODN6Hpbr: 00:27:20.954 09:55:44 -- host/auth.sh@47 -- # echo 'hmac(sha512)' 00:27:20.954 09:55:44 -- host/auth.sh@48 -- # echo ffdhe2048 00:27:20.954 
09:55:44 -- host/auth.sh@49 -- # echo DHHC-1:00:M2MzOGVhYjFjYjdjODJlMTAyZDJhM2IwMWRmMmI4ODN6Hpbr: 00:27:20.954 09:55:44 -- host/auth.sh@111 -- # connect_authenticate sha512 ffdhe2048 0 00:27:20.954 09:55:44 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:27:20.954 09:55:44 -- host/auth.sh@68 -- # digest=sha512 00:27:20.954 09:55:44 -- host/auth.sh@68 -- # dhgroup=ffdhe2048 00:27:20.954 09:55:44 -- host/auth.sh@68 -- # keyid=0 00:27:20.954 09:55:44 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:27:20.954 09:55:44 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:20.954 09:55:44 -- common/autotest_common.sh@10 -- # set +x 00:27:20.954 09:55:44 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:20.954 09:55:44 -- host/auth.sh@70 -- # get_main_ns_ip 00:27:20.954 09:55:44 -- nvmf/common.sh@717 -- # local ip 00:27:20.954 09:55:44 -- nvmf/common.sh@718 -- # ip_candidates=() 00:27:20.954 09:55:44 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:27:20.954 09:55:44 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:20.954 09:55:44 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:20.954 09:55:44 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:27:20.954 09:55:44 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:20.954 09:55:44 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:27:20.954 09:55:44 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:27:20.954 09:55:44 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:27:20.954 09:55:44 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 00:27:20.954 09:55:44 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:20.954 09:55:44 -- common/autotest_common.sh@10 -- # set +x 00:27:20.954 nvme0n1 00:27:20.954 09:55:44 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:20.954 09:55:44 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:27:20.954 09:55:44 -- host/auth.sh@73 -- # jq -r '.[].name' 00:27:20.954 09:55:44 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:20.954 09:55:44 -- common/autotest_common.sh@10 -- # set +x 00:27:20.954 09:55:44 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:20.954 09:55:44 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:20.954 09:55:44 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:20.954 09:55:44 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:20.954 09:55:44 -- common/autotest_common.sh@10 -- # set +x 00:27:20.954 09:55:44 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:20.954 09:55:44 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:27:20.954 09:55:44 -- host/auth.sh@110 -- # nvmet_auth_set_key sha512 ffdhe2048 1 00:27:20.954 09:55:44 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:27:20.954 09:55:44 -- host/auth.sh@44 -- # digest=sha512 00:27:20.954 09:55:44 -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:27:20.954 09:55:44 -- host/auth.sh@44 -- # keyid=1 00:27:20.954 09:55:44 -- host/auth.sh@45 -- # key=DHHC-1:00:Mjc5MDM2ZDVjN2FjYzYxODdiNjIyM2ZiNzA3Y2Y5ZTQ4NWQ0ODUwNGMwZGUzMTQxPQDxBw==: 00:27:20.954 09:55:44 -- host/auth.sh@47 -- # echo 'hmac(sha512)' 00:27:20.954 09:55:44 -- host/auth.sh@48 -- # echo ffdhe2048 00:27:20.954 09:55:44 -- host/auth.sh@49 -- # echo DHHC-1:00:Mjc5MDM2ZDVjN2FjYzYxODdiNjIyM2ZiNzA3Y2Y5ZTQ4NWQ0ODUwNGMwZGUzMTQxPQDxBw==: 00:27:20.954 09:55:44 
-- host/auth.sh@111 -- # connect_authenticate sha512 ffdhe2048 1 00:27:20.954 09:55:44 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:27:20.954 09:55:44 -- host/auth.sh@68 -- # digest=sha512 00:27:20.954 09:55:44 -- host/auth.sh@68 -- # dhgroup=ffdhe2048 00:27:20.954 09:55:44 -- host/auth.sh@68 -- # keyid=1 00:27:20.954 09:55:44 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:27:20.954 09:55:44 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:20.954 09:55:44 -- common/autotest_common.sh@10 -- # set +x 00:27:21.213 09:55:44 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:21.213 09:55:44 -- host/auth.sh@70 -- # get_main_ns_ip 00:27:21.213 09:55:44 -- nvmf/common.sh@717 -- # local ip 00:27:21.213 09:55:44 -- nvmf/common.sh@718 -- # ip_candidates=() 00:27:21.213 09:55:44 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:27:21.213 09:55:44 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:21.213 09:55:44 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:21.213 09:55:44 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:27:21.213 09:55:44 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:21.213 09:55:44 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:27:21.213 09:55:44 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:27:21.213 09:55:44 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:27:21.213 09:55:44 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 00:27:21.213 09:55:44 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:21.213 09:55:44 -- common/autotest_common.sh@10 -- # set +x 00:27:21.213 nvme0n1 00:27:21.213 09:55:44 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:21.213 09:55:44 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:27:21.213 09:55:44 -- host/auth.sh@73 -- # jq -r '.[].name' 00:27:21.213 09:55:44 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:21.213 09:55:44 -- common/autotest_common.sh@10 -- # set +x 00:27:21.213 09:55:44 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:21.213 09:55:44 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:21.213 09:55:44 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:21.213 09:55:44 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:21.213 09:55:44 -- common/autotest_common.sh@10 -- # set +x 00:27:21.213 09:55:44 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:21.213 09:55:44 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:27:21.213 09:55:44 -- host/auth.sh@110 -- # nvmet_auth_set_key sha512 ffdhe2048 2 00:27:21.213 09:55:44 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:27:21.213 09:55:44 -- host/auth.sh@44 -- # digest=sha512 00:27:21.213 09:55:44 -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:27:21.213 09:55:44 -- host/auth.sh@44 -- # keyid=2 00:27:21.213 09:55:44 -- host/auth.sh@45 -- # key=DHHC-1:01:OGY4NmI0ZGZmZGU3ZGE5YmQyMDJmZjY2ZTYwOTAyYjTeBEM7: 00:27:21.213 09:55:44 -- host/auth.sh@47 -- # echo 'hmac(sha512)' 00:27:21.213 09:55:44 -- host/auth.sh@48 -- # echo ffdhe2048 00:27:21.213 09:55:44 -- host/auth.sh@49 -- # echo DHHC-1:01:OGY4NmI0ZGZmZGU3ZGE5YmQyMDJmZjY2ZTYwOTAyYjTeBEM7: 00:27:21.213 09:55:44 -- host/auth.sh@111 -- # connect_authenticate sha512 ffdhe2048 2 00:27:21.213 09:55:44 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:27:21.213 09:55:44 -- 
host/auth.sh@68 -- # digest=sha512 00:27:21.213 09:55:44 -- host/auth.sh@68 -- # dhgroup=ffdhe2048 00:27:21.213 09:55:44 -- host/auth.sh@68 -- # keyid=2 00:27:21.213 09:55:44 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:27:21.213 09:55:44 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:21.213 09:55:44 -- common/autotest_common.sh@10 -- # set +x 00:27:21.213 09:55:44 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:21.213 09:55:44 -- host/auth.sh@70 -- # get_main_ns_ip 00:27:21.214 09:55:44 -- nvmf/common.sh@717 -- # local ip 00:27:21.214 09:55:44 -- nvmf/common.sh@718 -- # ip_candidates=() 00:27:21.214 09:55:44 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:27:21.214 09:55:44 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:21.214 09:55:44 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:21.214 09:55:44 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:27:21.214 09:55:44 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:21.214 09:55:44 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:27:21.214 09:55:44 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:27:21.214 09:55:44 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:27:21.214 09:55:44 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:27:21.214 09:55:44 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:21.214 09:55:44 -- common/autotest_common.sh@10 -- # set +x 00:27:21.473 nvme0n1 00:27:21.473 09:55:44 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:21.473 09:55:44 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:27:21.473 09:55:44 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:21.473 09:55:44 -- common/autotest_common.sh@10 -- # set +x 00:27:21.473 09:55:44 -- host/auth.sh@73 -- # jq -r '.[].name' 00:27:21.473 09:55:44 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:21.473 09:55:44 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:21.473 09:55:44 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:21.473 09:55:44 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:21.473 09:55:44 -- common/autotest_common.sh@10 -- # set +x 00:27:21.473 09:55:44 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:21.473 09:55:44 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:27:21.473 09:55:44 -- host/auth.sh@110 -- # nvmet_auth_set_key sha512 ffdhe2048 3 00:27:21.473 09:55:44 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:27:21.473 09:55:44 -- host/auth.sh@44 -- # digest=sha512 00:27:21.473 09:55:44 -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:27:21.473 09:55:44 -- host/auth.sh@44 -- # keyid=3 00:27:21.473 09:55:44 -- host/auth.sh@45 -- # key=DHHC-1:02:OGYxMGE0MzAzYjdhZDkwNzZjZmQ5NGVhNWZjNGNlZDA4MzM0YzM2NjRhZGJkZWM28+CvFA==: 00:27:21.473 09:55:44 -- host/auth.sh@47 -- # echo 'hmac(sha512)' 00:27:21.473 09:55:44 -- host/auth.sh@48 -- # echo ffdhe2048 00:27:21.473 09:55:44 -- host/auth.sh@49 -- # echo DHHC-1:02:OGYxMGE0MzAzYjdhZDkwNzZjZmQ5NGVhNWZjNGNlZDA4MzM0YzM2NjRhZGJkZWM28+CvFA==: 00:27:21.473 09:55:44 -- host/auth.sh@111 -- # connect_authenticate sha512 ffdhe2048 3 00:27:21.473 09:55:44 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:27:21.473 09:55:44 -- host/auth.sh@68 -- # digest=sha512 00:27:21.473 09:55:44 -- host/auth.sh@68 -- # dhgroup=ffdhe2048 00:27:21.473 09:55:44 
-- host/auth.sh@68 -- # keyid=3 00:27:21.473 09:55:44 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:27:21.473 09:55:44 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:21.473 09:55:44 -- common/autotest_common.sh@10 -- # set +x 00:27:21.473 09:55:44 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:21.473 09:55:44 -- host/auth.sh@70 -- # get_main_ns_ip 00:27:21.473 09:55:44 -- nvmf/common.sh@717 -- # local ip 00:27:21.473 09:55:44 -- nvmf/common.sh@718 -- # ip_candidates=() 00:27:21.473 09:55:44 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:27:21.473 09:55:44 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:21.473 09:55:44 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:21.473 09:55:44 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:27:21.473 09:55:44 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:21.473 09:55:44 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:27:21.473 09:55:44 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:27:21.473 09:55:44 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:27:21.473 09:55:44 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 00:27:21.473 09:55:44 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:21.473 09:55:44 -- common/autotest_common.sh@10 -- # set +x 00:27:21.473 nvme0n1 00:27:21.473 09:55:44 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:21.473 09:55:44 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:27:21.473 09:55:44 -- host/auth.sh@73 -- # jq -r '.[].name' 00:27:21.473 09:55:44 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:21.473 09:55:44 -- common/autotest_common.sh@10 -- # set +x 00:27:21.473 09:55:44 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:21.473 09:55:44 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:21.473 09:55:44 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:21.473 09:55:44 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:21.473 09:55:44 -- common/autotest_common.sh@10 -- # set +x 00:27:21.733 09:55:44 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:21.733 09:55:44 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:27:21.733 09:55:44 -- host/auth.sh@110 -- # nvmet_auth_set_key sha512 ffdhe2048 4 00:27:21.733 09:55:44 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:27:21.733 09:55:44 -- host/auth.sh@44 -- # digest=sha512 00:27:21.733 09:55:44 -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:27:21.733 09:55:44 -- host/auth.sh@44 -- # keyid=4 00:27:21.733 09:55:44 -- host/auth.sh@45 -- # key=DHHC-1:03:ZjFmYzBmMDlmZmUwZTY5NTA4ODQwODM5MDQ5NzUzZDE1NmE3ZGZkMTY3ZWYyMTBhYmQ3N2ZjNDQ4YmZiNzcxZYTk7W0=: 00:27:21.733 09:55:44 -- host/auth.sh@47 -- # echo 'hmac(sha512)' 00:27:21.733 09:55:44 -- host/auth.sh@48 -- # echo ffdhe2048 00:27:21.733 09:55:44 -- host/auth.sh@49 -- # echo DHHC-1:03:ZjFmYzBmMDlmZmUwZTY5NTA4ODQwODM5MDQ5NzUzZDE1NmE3ZGZkMTY3ZWYyMTBhYmQ3N2ZjNDQ4YmZiNzcxZYTk7W0=: 00:27:21.733 09:55:44 -- host/auth.sh@111 -- # connect_authenticate sha512 ffdhe2048 4 00:27:21.733 09:55:44 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:27:21.733 09:55:44 -- host/auth.sh@68 -- # digest=sha512 00:27:21.733 09:55:44 -- host/auth.sh@68 -- # dhgroup=ffdhe2048 00:27:21.733 09:55:44 -- host/auth.sh@68 -- # keyid=4 00:27:21.733 09:55:44 -- host/auth.sh@69 -- # 
rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:27:21.733 09:55:44 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:21.733 09:55:44 -- common/autotest_common.sh@10 -- # set +x 00:27:21.733 09:55:44 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:21.733 09:55:44 -- host/auth.sh@70 -- # get_main_ns_ip 00:27:21.733 09:55:44 -- nvmf/common.sh@717 -- # local ip 00:27:21.733 09:55:44 -- nvmf/common.sh@718 -- # ip_candidates=() 00:27:21.733 09:55:44 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:27:21.733 09:55:44 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:21.733 09:55:44 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:21.733 09:55:44 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:27:21.733 09:55:44 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:21.733 09:55:44 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:27:21.733 09:55:44 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:27:21.733 09:55:44 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:27:21.733 09:55:44 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:27:21.733 09:55:44 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:21.733 09:55:44 -- common/autotest_common.sh@10 -- # set +x 00:27:21.733 nvme0n1 00:27:21.733 09:55:44 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:21.733 09:55:44 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:27:21.733 09:55:44 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:21.733 09:55:44 -- common/autotest_common.sh@10 -- # set +x 00:27:21.733 09:55:44 -- host/auth.sh@73 -- # jq -r '.[].name' 00:27:21.733 09:55:45 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:21.733 09:55:45 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:21.733 09:55:45 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:21.733 09:55:45 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:21.733 09:55:45 -- common/autotest_common.sh@10 -- # set +x 00:27:21.733 09:55:45 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:21.733 09:55:45 -- host/auth.sh@108 -- # for dhgroup in "${dhgroups[@]}" 00:27:21.733 09:55:45 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:27:21.733 09:55:45 -- host/auth.sh@110 -- # nvmet_auth_set_key sha512 ffdhe3072 0 00:27:21.733 09:55:45 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:27:21.733 09:55:45 -- host/auth.sh@44 -- # digest=sha512 00:27:21.733 09:55:45 -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:27:21.733 09:55:45 -- host/auth.sh@44 -- # keyid=0 00:27:21.733 09:55:45 -- host/auth.sh@45 -- # key=DHHC-1:00:M2MzOGVhYjFjYjdjODJlMTAyZDJhM2IwMWRmMmI4ODN6Hpbr: 00:27:21.733 09:55:45 -- host/auth.sh@47 -- # echo 'hmac(sha512)' 00:27:21.733 09:55:45 -- host/auth.sh@48 -- # echo ffdhe3072 00:27:21.733 09:55:45 -- host/auth.sh@49 -- # echo DHHC-1:00:M2MzOGVhYjFjYjdjODJlMTAyZDJhM2IwMWRmMmI4ODN6Hpbr: 00:27:21.733 09:55:45 -- host/auth.sh@111 -- # connect_authenticate sha512 ffdhe3072 0 00:27:21.733 09:55:45 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:27:21.733 09:55:45 -- host/auth.sh@68 -- # digest=sha512 00:27:21.733 09:55:45 -- host/auth.sh@68 -- # dhgroup=ffdhe3072 00:27:21.733 09:55:45 -- host/auth.sh@68 -- # keyid=0 00:27:21.733 09:55:45 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 
00:27:21.733 09:55:45 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:21.733 09:55:45 -- common/autotest_common.sh@10 -- # set +x 00:27:21.733 09:55:45 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:21.733 09:55:45 -- host/auth.sh@70 -- # get_main_ns_ip 00:27:21.733 09:55:45 -- nvmf/common.sh@717 -- # local ip 00:27:21.733 09:55:45 -- nvmf/common.sh@718 -- # ip_candidates=() 00:27:21.733 09:55:45 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:27:21.733 09:55:45 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:21.733 09:55:45 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:21.733 09:55:45 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:27:21.733 09:55:45 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:21.733 09:55:45 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:27:21.733 09:55:45 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:27:21.733 09:55:45 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:27:21.733 09:55:45 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 00:27:21.733 09:55:45 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:21.734 09:55:45 -- common/autotest_common.sh@10 -- # set +x 00:27:21.993 nvme0n1 00:27:21.993 09:55:45 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:21.993 09:55:45 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:27:21.993 09:55:45 -- host/auth.sh@73 -- # jq -r '.[].name' 00:27:21.993 09:55:45 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:21.993 09:55:45 -- common/autotest_common.sh@10 -- # set +x 00:27:21.993 09:55:45 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:21.993 09:55:45 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:21.993 09:55:45 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:21.993 09:55:45 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:21.993 09:55:45 -- common/autotest_common.sh@10 -- # set +x 00:27:21.993 09:55:45 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:21.993 09:55:45 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:27:21.993 09:55:45 -- host/auth.sh@110 -- # nvmet_auth_set_key sha512 ffdhe3072 1 00:27:21.993 09:55:45 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:27:21.993 09:55:45 -- host/auth.sh@44 -- # digest=sha512 00:27:21.993 09:55:45 -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:27:21.993 09:55:45 -- host/auth.sh@44 -- # keyid=1 00:27:21.993 09:55:45 -- host/auth.sh@45 -- # key=DHHC-1:00:Mjc5MDM2ZDVjN2FjYzYxODdiNjIyM2ZiNzA3Y2Y5ZTQ4NWQ0ODUwNGMwZGUzMTQxPQDxBw==: 00:27:21.993 09:55:45 -- host/auth.sh@47 -- # echo 'hmac(sha512)' 00:27:21.993 09:55:45 -- host/auth.sh@48 -- # echo ffdhe3072 00:27:21.993 09:55:45 -- host/auth.sh@49 -- # echo DHHC-1:00:Mjc5MDM2ZDVjN2FjYzYxODdiNjIyM2ZiNzA3Y2Y5ZTQ4NWQ0ODUwNGMwZGUzMTQxPQDxBw==: 00:27:21.993 09:55:45 -- host/auth.sh@111 -- # connect_authenticate sha512 ffdhe3072 1 00:27:21.993 09:55:45 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:27:21.993 09:55:45 -- host/auth.sh@68 -- # digest=sha512 00:27:21.993 09:55:45 -- host/auth.sh@68 -- # dhgroup=ffdhe3072 00:27:21.993 09:55:45 -- host/auth.sh@68 -- # keyid=1 00:27:21.993 09:55:45 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:27:21.993 09:55:45 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:21.993 09:55:45 -- 
common/autotest_common.sh@10 -- # set +x 00:27:21.993 09:55:45 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:21.993 09:55:45 -- host/auth.sh@70 -- # get_main_ns_ip 00:27:21.993 09:55:45 -- nvmf/common.sh@717 -- # local ip 00:27:21.993 09:55:45 -- nvmf/common.sh@718 -- # ip_candidates=() 00:27:21.993 09:55:45 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:27:21.993 09:55:45 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:21.993 09:55:45 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:21.993 09:55:45 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:27:21.993 09:55:45 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:21.993 09:55:45 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:27:21.993 09:55:45 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:27:21.993 09:55:45 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:27:21.993 09:55:45 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 00:27:21.993 09:55:45 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:21.993 09:55:45 -- common/autotest_common.sh@10 -- # set +x 00:27:22.252 nvme0n1 00:27:22.252 09:55:45 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:22.252 09:55:45 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:27:22.252 09:55:45 -- host/auth.sh@73 -- # jq -r '.[].name' 00:27:22.252 09:55:45 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:22.252 09:55:45 -- common/autotest_common.sh@10 -- # set +x 00:27:22.252 09:55:45 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:22.252 09:55:45 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:22.252 09:55:45 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:22.252 09:55:45 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:22.252 09:55:45 -- common/autotest_common.sh@10 -- # set +x 00:27:22.252 09:55:45 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:22.252 09:55:45 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:27:22.252 09:55:45 -- host/auth.sh@110 -- # nvmet_auth_set_key sha512 ffdhe3072 2 00:27:22.252 09:55:45 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:27:22.252 09:55:45 -- host/auth.sh@44 -- # digest=sha512 00:27:22.252 09:55:45 -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:27:22.252 09:55:45 -- host/auth.sh@44 -- # keyid=2 00:27:22.252 09:55:45 -- host/auth.sh@45 -- # key=DHHC-1:01:OGY4NmI0ZGZmZGU3ZGE5YmQyMDJmZjY2ZTYwOTAyYjTeBEM7: 00:27:22.252 09:55:45 -- host/auth.sh@47 -- # echo 'hmac(sha512)' 00:27:22.252 09:55:45 -- host/auth.sh@48 -- # echo ffdhe3072 00:27:22.252 09:55:45 -- host/auth.sh@49 -- # echo DHHC-1:01:OGY4NmI0ZGZmZGU3ZGE5YmQyMDJmZjY2ZTYwOTAyYjTeBEM7: 00:27:22.252 09:55:45 -- host/auth.sh@111 -- # connect_authenticate sha512 ffdhe3072 2 00:27:22.252 09:55:45 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:27:22.252 09:55:45 -- host/auth.sh@68 -- # digest=sha512 00:27:22.252 09:55:45 -- host/auth.sh@68 -- # dhgroup=ffdhe3072 00:27:22.252 09:55:45 -- host/auth.sh@68 -- # keyid=2 00:27:22.252 09:55:45 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:27:22.252 09:55:45 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:22.252 09:55:45 -- common/autotest_common.sh@10 -- # set +x 00:27:22.252 09:55:45 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:22.252 09:55:45 -- host/auth.sh@70 -- # 
get_main_ns_ip 00:27:22.252 09:55:45 -- nvmf/common.sh@717 -- # local ip 00:27:22.252 09:55:45 -- nvmf/common.sh@718 -- # ip_candidates=() 00:27:22.252 09:55:45 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:27:22.252 09:55:45 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:22.252 09:55:45 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:22.252 09:55:45 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:27:22.252 09:55:45 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:22.252 09:55:45 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:27:22.252 09:55:45 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:27:22.252 09:55:45 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:27:22.252 09:55:45 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:27:22.252 09:55:45 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:22.252 09:55:45 -- common/autotest_common.sh@10 -- # set +x 00:27:22.511 nvme0n1 00:27:22.511 09:55:45 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:22.511 09:55:45 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:27:22.511 09:55:45 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:22.511 09:55:45 -- host/auth.sh@73 -- # jq -r '.[].name' 00:27:22.511 09:55:45 -- common/autotest_common.sh@10 -- # set +x 00:27:22.511 09:55:45 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:22.511 09:55:45 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:22.511 09:55:45 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:22.511 09:55:45 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:22.511 09:55:45 -- common/autotest_common.sh@10 -- # set +x 00:27:22.511 09:55:45 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:22.511 09:55:45 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:27:22.511 09:55:45 -- host/auth.sh@110 -- # nvmet_auth_set_key sha512 ffdhe3072 3 00:27:22.511 09:55:45 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:27:22.511 09:55:45 -- host/auth.sh@44 -- # digest=sha512 00:27:22.511 09:55:45 -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:27:22.511 09:55:45 -- host/auth.sh@44 -- # keyid=3 00:27:22.511 09:55:45 -- host/auth.sh@45 -- # key=DHHC-1:02:OGYxMGE0MzAzYjdhZDkwNzZjZmQ5NGVhNWZjNGNlZDA4MzM0YzM2NjRhZGJkZWM28+CvFA==: 00:27:22.511 09:55:45 -- host/auth.sh@47 -- # echo 'hmac(sha512)' 00:27:22.511 09:55:45 -- host/auth.sh@48 -- # echo ffdhe3072 00:27:22.511 09:55:45 -- host/auth.sh@49 -- # echo DHHC-1:02:OGYxMGE0MzAzYjdhZDkwNzZjZmQ5NGVhNWZjNGNlZDA4MzM0YzM2NjRhZGJkZWM28+CvFA==: 00:27:22.511 09:55:45 -- host/auth.sh@111 -- # connect_authenticate sha512 ffdhe3072 3 00:27:22.511 09:55:45 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:27:22.511 09:55:45 -- host/auth.sh@68 -- # digest=sha512 00:27:22.511 09:55:45 -- host/auth.sh@68 -- # dhgroup=ffdhe3072 00:27:22.511 09:55:45 -- host/auth.sh@68 -- # keyid=3 00:27:22.511 09:55:45 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:27:22.511 09:55:45 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:22.511 09:55:45 -- common/autotest_common.sh@10 -- # set +x 00:27:22.511 09:55:45 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:22.511 09:55:45 -- host/auth.sh@70 -- # get_main_ns_ip 00:27:22.511 09:55:45 -- nvmf/common.sh@717 -- # local ip 00:27:22.511 09:55:45 -- nvmf/common.sh@718 -- 
# ip_candidates=() 00:27:22.511 09:55:45 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:27:22.511 09:55:45 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:22.511 09:55:45 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:22.511 09:55:45 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:27:22.511 09:55:45 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:22.511 09:55:45 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:27:22.511 09:55:45 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:27:22.511 09:55:45 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:27:22.511 09:55:45 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 00:27:22.511 09:55:45 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:22.511 09:55:45 -- common/autotest_common.sh@10 -- # set +x 00:27:22.511 nvme0n1 00:27:22.511 09:55:45 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:22.511 09:55:45 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:27:22.511 09:55:45 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:22.511 09:55:45 -- host/auth.sh@73 -- # jq -r '.[].name' 00:27:22.511 09:55:45 -- common/autotest_common.sh@10 -- # set +x 00:27:22.511 09:55:45 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:22.771 09:55:45 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:22.771 09:55:45 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:22.771 09:55:45 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:22.771 09:55:45 -- common/autotest_common.sh@10 -- # set +x 00:27:22.771 09:55:45 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:22.771 09:55:45 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:27:22.771 09:55:45 -- host/auth.sh@110 -- # nvmet_auth_set_key sha512 ffdhe3072 4 00:27:22.771 09:55:45 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:27:22.771 09:55:45 -- host/auth.sh@44 -- # digest=sha512 00:27:22.771 09:55:45 -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:27:22.771 09:55:45 -- host/auth.sh@44 -- # keyid=4 00:27:22.771 09:55:45 -- host/auth.sh@45 -- # key=DHHC-1:03:ZjFmYzBmMDlmZmUwZTY5NTA4ODQwODM5MDQ5NzUzZDE1NmE3ZGZkMTY3ZWYyMTBhYmQ3N2ZjNDQ4YmZiNzcxZYTk7W0=: 00:27:22.771 09:55:45 -- host/auth.sh@47 -- # echo 'hmac(sha512)' 00:27:22.771 09:55:45 -- host/auth.sh@48 -- # echo ffdhe3072 00:27:22.771 09:55:45 -- host/auth.sh@49 -- # echo DHHC-1:03:ZjFmYzBmMDlmZmUwZTY5NTA4ODQwODM5MDQ5NzUzZDE1NmE3ZGZkMTY3ZWYyMTBhYmQ3N2ZjNDQ4YmZiNzcxZYTk7W0=: 00:27:22.771 09:55:45 -- host/auth.sh@111 -- # connect_authenticate sha512 ffdhe3072 4 00:27:22.771 09:55:45 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:27:22.771 09:55:45 -- host/auth.sh@68 -- # digest=sha512 00:27:22.771 09:55:45 -- host/auth.sh@68 -- # dhgroup=ffdhe3072 00:27:22.771 09:55:45 -- host/auth.sh@68 -- # keyid=4 00:27:22.771 09:55:45 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:27:22.771 09:55:45 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:22.771 09:55:45 -- common/autotest_common.sh@10 -- # set +x 00:27:22.771 09:55:45 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:22.771 09:55:45 -- host/auth.sh@70 -- # get_main_ns_ip 00:27:22.771 09:55:45 -- nvmf/common.sh@717 -- # local ip 00:27:22.771 09:55:45 -- nvmf/common.sh@718 -- # ip_candidates=() 00:27:22.771 09:55:45 -- nvmf/common.sh@718 -- # local -A 
ip_candidates 00:27:22.771 09:55:45 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:22.771 09:55:45 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:22.771 09:55:45 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:27:22.771 09:55:45 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:22.771 09:55:45 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:27:22.771 09:55:45 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:27:22.771 09:55:45 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:27:22.771 09:55:45 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:27:22.771 09:55:45 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:22.771 09:55:45 -- common/autotest_common.sh@10 -- # set +x 00:27:22.771 nvme0n1 00:27:22.771 09:55:46 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:22.771 09:55:46 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:27:22.771 09:55:46 -- host/auth.sh@73 -- # jq -r '.[].name' 00:27:22.771 09:55:46 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:22.771 09:55:46 -- common/autotest_common.sh@10 -- # set +x 00:27:22.771 09:55:46 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:22.771 09:55:46 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:22.771 09:55:46 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:22.771 09:55:46 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:22.771 09:55:46 -- common/autotest_common.sh@10 -- # set +x 00:27:22.771 09:55:46 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:22.771 09:55:46 -- host/auth.sh@108 -- # for dhgroup in "${dhgroups[@]}" 00:27:22.771 09:55:46 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:27:22.771 09:55:46 -- host/auth.sh@110 -- # nvmet_auth_set_key sha512 ffdhe4096 0 00:27:22.771 09:55:46 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:27:22.771 09:55:46 -- host/auth.sh@44 -- # digest=sha512 00:27:22.771 09:55:46 -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:27:22.771 09:55:46 -- host/auth.sh@44 -- # keyid=0 00:27:22.771 09:55:46 -- host/auth.sh@45 -- # key=DHHC-1:00:M2MzOGVhYjFjYjdjODJlMTAyZDJhM2IwMWRmMmI4ODN6Hpbr: 00:27:22.771 09:55:46 -- host/auth.sh@47 -- # echo 'hmac(sha512)' 00:27:22.771 09:55:46 -- host/auth.sh@48 -- # echo ffdhe4096 00:27:22.771 09:55:46 -- host/auth.sh@49 -- # echo DHHC-1:00:M2MzOGVhYjFjYjdjODJlMTAyZDJhM2IwMWRmMmI4ODN6Hpbr: 00:27:22.771 09:55:46 -- host/auth.sh@111 -- # connect_authenticate sha512 ffdhe4096 0 00:27:22.771 09:55:46 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:27:22.771 09:55:46 -- host/auth.sh@68 -- # digest=sha512 00:27:22.771 09:55:46 -- host/auth.sh@68 -- # dhgroup=ffdhe4096 00:27:22.771 09:55:46 -- host/auth.sh@68 -- # keyid=0 00:27:22.771 09:55:46 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:27:22.771 09:55:46 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:22.771 09:55:46 -- common/autotest_common.sh@10 -- # set +x 00:27:22.771 09:55:46 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:22.771 09:55:46 -- host/auth.sh@70 -- # get_main_ns_ip 00:27:22.771 09:55:46 -- nvmf/common.sh@717 -- # local ip 00:27:22.771 09:55:46 -- nvmf/common.sh@718 -- # ip_candidates=() 00:27:22.771 09:55:46 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:27:22.771 09:55:46 -- nvmf/common.sh@720 -- # 
ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:22.771 09:55:46 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:22.771 09:55:46 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:27:22.771 09:55:46 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:22.772 09:55:46 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:27:22.772 09:55:46 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:27:22.772 09:55:46 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:27:22.772 09:55:46 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 00:27:22.772 09:55:46 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:22.772 09:55:46 -- common/autotest_common.sh@10 -- # set +x 00:27:23.031 nvme0n1 00:27:23.031 09:55:46 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:23.031 09:55:46 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:27:23.031 09:55:46 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:23.031 09:55:46 -- common/autotest_common.sh@10 -- # set +x 00:27:23.031 09:55:46 -- host/auth.sh@73 -- # jq -r '.[].name' 00:27:23.031 09:55:46 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:23.031 09:55:46 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:23.031 09:55:46 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:23.031 09:55:46 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:23.031 09:55:46 -- common/autotest_common.sh@10 -- # set +x 00:27:23.031 09:55:46 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:23.031 09:55:46 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:27:23.031 09:55:46 -- host/auth.sh@110 -- # nvmet_auth_set_key sha512 ffdhe4096 1 00:27:23.031 09:55:46 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:27:23.031 09:55:46 -- host/auth.sh@44 -- # digest=sha512 00:27:23.031 09:55:46 -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:27:23.031 09:55:46 -- host/auth.sh@44 -- # keyid=1 00:27:23.031 09:55:46 -- host/auth.sh@45 -- # key=DHHC-1:00:Mjc5MDM2ZDVjN2FjYzYxODdiNjIyM2ZiNzA3Y2Y5ZTQ4NWQ0ODUwNGMwZGUzMTQxPQDxBw==: 00:27:23.031 09:55:46 -- host/auth.sh@47 -- # echo 'hmac(sha512)' 00:27:23.031 09:55:46 -- host/auth.sh@48 -- # echo ffdhe4096 00:27:23.031 09:55:46 -- host/auth.sh@49 -- # echo DHHC-1:00:Mjc5MDM2ZDVjN2FjYzYxODdiNjIyM2ZiNzA3Y2Y5ZTQ4NWQ0ODUwNGMwZGUzMTQxPQDxBw==: 00:27:23.031 09:55:46 -- host/auth.sh@111 -- # connect_authenticate sha512 ffdhe4096 1 00:27:23.031 09:55:46 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:27:23.031 09:55:46 -- host/auth.sh@68 -- # digest=sha512 00:27:23.031 09:55:46 -- host/auth.sh@68 -- # dhgroup=ffdhe4096 00:27:23.031 09:55:46 -- host/auth.sh@68 -- # keyid=1 00:27:23.031 09:55:46 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:27:23.031 09:55:46 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:23.031 09:55:46 -- common/autotest_common.sh@10 -- # set +x 00:27:23.290 09:55:46 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:23.290 09:55:46 -- host/auth.sh@70 -- # get_main_ns_ip 00:27:23.290 09:55:46 -- nvmf/common.sh@717 -- # local ip 00:27:23.290 09:55:46 -- nvmf/common.sh@718 -- # ip_candidates=() 00:27:23.290 09:55:46 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:27:23.290 09:55:46 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:23.290 09:55:46 -- nvmf/common.sh@721 -- # 
ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:23.290 09:55:46 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:27:23.290 09:55:46 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:23.290 09:55:46 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:27:23.290 09:55:46 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:27:23.290 09:55:46 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:27:23.290 09:55:46 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 00:27:23.290 09:55:46 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:23.290 09:55:46 -- common/autotest_common.sh@10 -- # set +x 00:27:23.290 nvme0n1 00:27:23.290 09:55:46 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:23.290 09:55:46 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:27:23.290 09:55:46 -- host/auth.sh@73 -- # jq -r '.[].name' 00:27:23.290 09:55:46 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:23.290 09:55:46 -- common/autotest_common.sh@10 -- # set +x 00:27:23.290 09:55:46 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:23.549 09:55:46 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:23.549 09:55:46 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:23.550 09:55:46 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:23.550 09:55:46 -- common/autotest_common.sh@10 -- # set +x 00:27:23.550 09:55:46 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:23.550 09:55:46 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:27:23.550 09:55:46 -- host/auth.sh@110 -- # nvmet_auth_set_key sha512 ffdhe4096 2 00:27:23.550 09:55:46 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:27:23.550 09:55:46 -- host/auth.sh@44 -- # digest=sha512 00:27:23.550 09:55:46 -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:27:23.550 09:55:46 -- host/auth.sh@44 -- # keyid=2 00:27:23.550 09:55:46 -- host/auth.sh@45 -- # key=DHHC-1:01:OGY4NmI0ZGZmZGU3ZGE5YmQyMDJmZjY2ZTYwOTAyYjTeBEM7: 00:27:23.550 09:55:46 -- host/auth.sh@47 -- # echo 'hmac(sha512)' 00:27:23.550 09:55:46 -- host/auth.sh@48 -- # echo ffdhe4096 00:27:23.550 09:55:46 -- host/auth.sh@49 -- # echo DHHC-1:01:OGY4NmI0ZGZmZGU3ZGE5YmQyMDJmZjY2ZTYwOTAyYjTeBEM7: 00:27:23.550 09:55:46 -- host/auth.sh@111 -- # connect_authenticate sha512 ffdhe4096 2 00:27:23.550 09:55:46 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:27:23.550 09:55:46 -- host/auth.sh@68 -- # digest=sha512 00:27:23.550 09:55:46 -- host/auth.sh@68 -- # dhgroup=ffdhe4096 00:27:23.550 09:55:46 -- host/auth.sh@68 -- # keyid=2 00:27:23.550 09:55:46 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:27:23.550 09:55:46 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:23.550 09:55:46 -- common/autotest_common.sh@10 -- # set +x 00:27:23.550 09:55:46 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:23.550 09:55:46 -- host/auth.sh@70 -- # get_main_ns_ip 00:27:23.550 09:55:46 -- nvmf/common.sh@717 -- # local ip 00:27:23.550 09:55:46 -- nvmf/common.sh@718 -- # ip_candidates=() 00:27:23.550 09:55:46 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:27:23.550 09:55:46 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:23.550 09:55:46 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:23.550 09:55:46 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:27:23.550 09:55:46 -- nvmf/common.sh@723 -- # [[ -z 
NVMF_INITIATOR_IP ]] 00:27:23.550 09:55:46 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:27:23.550 09:55:46 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:27:23.550 09:55:46 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:27:23.550 09:55:46 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:27:23.550 09:55:46 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:23.550 09:55:46 -- common/autotest_common.sh@10 -- # set +x 00:27:23.550 nvme0n1 00:27:23.550 09:55:46 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:23.550 09:55:46 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:27:23.550 09:55:46 -- host/auth.sh@73 -- # jq -r '.[].name' 00:27:23.550 09:55:46 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:23.550 09:55:46 -- common/autotest_common.sh@10 -- # set +x 00:27:23.550 09:55:46 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:23.820 09:55:46 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:23.820 09:55:46 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:23.820 09:55:46 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:23.820 09:55:46 -- common/autotest_common.sh@10 -- # set +x 00:27:23.820 09:55:46 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:23.820 09:55:46 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:27:23.820 09:55:46 -- host/auth.sh@110 -- # nvmet_auth_set_key sha512 ffdhe4096 3 00:27:23.820 09:55:46 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:27:23.820 09:55:46 -- host/auth.sh@44 -- # digest=sha512 00:27:23.820 09:55:46 -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:27:23.820 09:55:46 -- host/auth.sh@44 -- # keyid=3 00:27:23.820 09:55:46 -- host/auth.sh@45 -- # key=DHHC-1:02:OGYxMGE0MzAzYjdhZDkwNzZjZmQ5NGVhNWZjNGNlZDA4MzM0YzM2NjRhZGJkZWM28+CvFA==: 00:27:23.820 09:55:46 -- host/auth.sh@47 -- # echo 'hmac(sha512)' 00:27:23.820 09:55:46 -- host/auth.sh@48 -- # echo ffdhe4096 00:27:23.820 09:55:46 -- host/auth.sh@49 -- # echo DHHC-1:02:OGYxMGE0MzAzYjdhZDkwNzZjZmQ5NGVhNWZjNGNlZDA4MzM0YzM2NjRhZGJkZWM28+CvFA==: 00:27:23.820 09:55:46 -- host/auth.sh@111 -- # connect_authenticate sha512 ffdhe4096 3 00:27:23.820 09:55:46 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:27:23.820 09:55:46 -- host/auth.sh@68 -- # digest=sha512 00:27:23.820 09:55:46 -- host/auth.sh@68 -- # dhgroup=ffdhe4096 00:27:23.820 09:55:46 -- host/auth.sh@68 -- # keyid=3 00:27:23.820 09:55:46 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:27:23.820 09:55:46 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:23.820 09:55:46 -- common/autotest_common.sh@10 -- # set +x 00:27:23.820 09:55:46 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:23.820 09:55:46 -- host/auth.sh@70 -- # get_main_ns_ip 00:27:23.820 09:55:46 -- nvmf/common.sh@717 -- # local ip 00:27:23.820 09:55:46 -- nvmf/common.sh@718 -- # ip_candidates=() 00:27:23.820 09:55:46 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:27:23.820 09:55:46 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:23.820 09:55:46 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:23.820 09:55:46 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:27:23.820 09:55:46 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:23.820 09:55:46 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:27:23.820 09:55:46 -- 
nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:27:23.820 09:55:46 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:27:23.820 09:55:46 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 00:27:23.820 09:55:46 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:23.820 09:55:46 -- common/autotest_common.sh@10 -- # set +x 00:27:23.820 nvme0n1 00:27:23.820 09:55:47 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:23.820 09:55:47 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:27:23.820 09:55:47 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:23.820 09:55:47 -- host/auth.sh@73 -- # jq -r '.[].name' 00:27:23.820 09:55:47 -- common/autotest_common.sh@10 -- # set +x 00:27:23.820 09:55:47 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:24.080 09:55:47 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:24.080 09:55:47 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:24.080 09:55:47 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:24.080 09:55:47 -- common/autotest_common.sh@10 -- # set +x 00:27:24.080 09:55:47 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:24.080 09:55:47 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:27:24.080 09:55:47 -- host/auth.sh@110 -- # nvmet_auth_set_key sha512 ffdhe4096 4 00:27:24.080 09:55:47 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:27:24.080 09:55:47 -- host/auth.sh@44 -- # digest=sha512 00:27:24.080 09:55:47 -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:27:24.080 09:55:47 -- host/auth.sh@44 -- # keyid=4 00:27:24.080 09:55:47 -- host/auth.sh@45 -- # key=DHHC-1:03:ZjFmYzBmMDlmZmUwZTY5NTA4ODQwODM5MDQ5NzUzZDE1NmE3ZGZkMTY3ZWYyMTBhYmQ3N2ZjNDQ4YmZiNzcxZYTk7W0=: 00:27:24.080 09:55:47 -- host/auth.sh@47 -- # echo 'hmac(sha512)' 00:27:24.080 09:55:47 -- host/auth.sh@48 -- # echo ffdhe4096 00:27:24.080 09:55:47 -- host/auth.sh@49 -- # echo DHHC-1:03:ZjFmYzBmMDlmZmUwZTY5NTA4ODQwODM5MDQ5NzUzZDE1NmE3ZGZkMTY3ZWYyMTBhYmQ3N2ZjNDQ4YmZiNzcxZYTk7W0=: 00:27:24.080 09:55:47 -- host/auth.sh@111 -- # connect_authenticate sha512 ffdhe4096 4 00:27:24.080 09:55:47 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:27:24.080 09:55:47 -- host/auth.sh@68 -- # digest=sha512 00:27:24.080 09:55:47 -- host/auth.sh@68 -- # dhgroup=ffdhe4096 00:27:24.080 09:55:47 -- host/auth.sh@68 -- # keyid=4 00:27:24.080 09:55:47 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:27:24.080 09:55:47 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:24.080 09:55:47 -- common/autotest_common.sh@10 -- # set +x 00:27:24.080 09:55:47 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:24.080 09:55:47 -- host/auth.sh@70 -- # get_main_ns_ip 00:27:24.080 09:55:47 -- nvmf/common.sh@717 -- # local ip 00:27:24.080 09:55:47 -- nvmf/common.sh@718 -- # ip_candidates=() 00:27:24.080 09:55:47 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:27:24.080 09:55:47 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:24.080 09:55:47 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:24.080 09:55:47 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:27:24.080 09:55:47 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:24.080 09:55:47 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:27:24.080 09:55:47 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:27:24.080 09:55:47 -- 
nvmf/common.sh@731 -- # echo 10.0.0.1 00:27:24.080 09:55:47 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:27:24.080 09:55:47 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:24.080 09:55:47 -- common/autotest_common.sh@10 -- # set +x 00:27:24.080 nvme0n1 00:27:24.080 09:55:47 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:24.080 09:55:47 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:27:24.080 09:55:47 -- host/auth.sh@73 -- # jq -r '.[].name' 00:27:24.080 09:55:47 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:24.080 09:55:47 -- common/autotest_common.sh@10 -- # set +x 00:27:24.340 09:55:47 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:24.340 09:55:47 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:24.340 09:55:47 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:24.340 09:55:47 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:24.340 09:55:47 -- common/autotest_common.sh@10 -- # set +x 00:27:24.340 09:55:47 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:24.340 09:55:47 -- host/auth.sh@108 -- # for dhgroup in "${dhgroups[@]}" 00:27:24.340 09:55:47 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:27:24.340 09:55:47 -- host/auth.sh@110 -- # nvmet_auth_set_key sha512 ffdhe6144 0 00:27:24.340 09:55:47 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:27:24.340 09:55:47 -- host/auth.sh@44 -- # digest=sha512 00:27:24.340 09:55:47 -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:27:24.340 09:55:47 -- host/auth.sh@44 -- # keyid=0 00:27:24.340 09:55:47 -- host/auth.sh@45 -- # key=DHHC-1:00:M2MzOGVhYjFjYjdjODJlMTAyZDJhM2IwMWRmMmI4ODN6Hpbr: 00:27:24.340 09:55:47 -- host/auth.sh@47 -- # echo 'hmac(sha512)' 00:27:24.340 09:55:47 -- host/auth.sh@48 -- # echo ffdhe6144 00:27:24.340 09:55:47 -- host/auth.sh@49 -- # echo DHHC-1:00:M2MzOGVhYjFjYjdjODJlMTAyZDJhM2IwMWRmMmI4ODN6Hpbr: 00:27:24.340 09:55:47 -- host/auth.sh@111 -- # connect_authenticate sha512 ffdhe6144 0 00:27:24.340 09:55:47 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:27:24.340 09:55:47 -- host/auth.sh@68 -- # digest=sha512 00:27:24.340 09:55:47 -- host/auth.sh@68 -- # dhgroup=ffdhe6144 00:27:24.340 09:55:47 -- host/auth.sh@68 -- # keyid=0 00:27:24.340 09:55:47 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:27:24.340 09:55:47 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:24.340 09:55:47 -- common/autotest_common.sh@10 -- # set +x 00:27:24.340 09:55:47 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:24.340 09:55:47 -- host/auth.sh@70 -- # get_main_ns_ip 00:27:24.340 09:55:47 -- nvmf/common.sh@717 -- # local ip 00:27:24.340 09:55:47 -- nvmf/common.sh@718 -- # ip_candidates=() 00:27:24.340 09:55:47 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:27:24.340 09:55:47 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:24.340 09:55:47 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:24.340 09:55:47 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:27:24.340 09:55:47 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:24.340 09:55:47 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:27:24.340 09:55:47 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:27:24.340 09:55:47 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:27:24.340 09:55:47 -- host/auth.sh@70 -- # 
rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 00:27:24.340 09:55:47 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:24.340 09:55:47 -- common/autotest_common.sh@10 -- # set +x 00:27:24.600 nvme0n1 00:27:24.600 09:55:47 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:24.600 09:55:47 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:27:24.600 09:55:47 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:24.600 09:55:47 -- host/auth.sh@73 -- # jq -r '.[].name' 00:27:24.600 09:55:47 -- common/autotest_common.sh@10 -- # set +x 00:27:24.600 09:55:47 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:24.600 09:55:47 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:24.600 09:55:47 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:24.600 09:55:47 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:24.600 09:55:47 -- common/autotest_common.sh@10 -- # set +x 00:27:24.600 09:55:47 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:24.600 09:55:47 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:27:24.600 09:55:47 -- host/auth.sh@110 -- # nvmet_auth_set_key sha512 ffdhe6144 1 00:27:24.600 09:55:47 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:27:24.600 09:55:47 -- host/auth.sh@44 -- # digest=sha512 00:27:24.600 09:55:47 -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:27:24.600 09:55:47 -- host/auth.sh@44 -- # keyid=1 00:27:24.600 09:55:47 -- host/auth.sh@45 -- # key=DHHC-1:00:Mjc5MDM2ZDVjN2FjYzYxODdiNjIyM2ZiNzA3Y2Y5ZTQ4NWQ0ODUwNGMwZGUzMTQxPQDxBw==: 00:27:24.601 09:55:47 -- host/auth.sh@47 -- # echo 'hmac(sha512)' 00:27:24.601 09:55:47 -- host/auth.sh@48 -- # echo ffdhe6144 00:27:24.601 09:55:47 -- host/auth.sh@49 -- # echo DHHC-1:00:Mjc5MDM2ZDVjN2FjYzYxODdiNjIyM2ZiNzA3Y2Y5ZTQ4NWQ0ODUwNGMwZGUzMTQxPQDxBw==: 00:27:24.601 09:55:47 -- host/auth.sh@111 -- # connect_authenticate sha512 ffdhe6144 1 00:27:24.601 09:55:47 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:27:24.601 09:55:47 -- host/auth.sh@68 -- # digest=sha512 00:27:24.601 09:55:47 -- host/auth.sh@68 -- # dhgroup=ffdhe6144 00:27:24.601 09:55:47 -- host/auth.sh@68 -- # keyid=1 00:27:24.601 09:55:47 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:27:24.601 09:55:47 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:24.601 09:55:47 -- common/autotest_common.sh@10 -- # set +x 00:27:24.601 09:55:47 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:24.601 09:55:47 -- host/auth.sh@70 -- # get_main_ns_ip 00:27:24.601 09:55:47 -- nvmf/common.sh@717 -- # local ip 00:27:24.601 09:55:47 -- nvmf/common.sh@718 -- # ip_candidates=() 00:27:24.601 09:55:47 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:27:24.601 09:55:47 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:24.601 09:55:47 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:24.601 09:55:47 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:27:24.601 09:55:47 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:24.601 09:55:47 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:27:24.601 09:55:47 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:27:24.601 09:55:47 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:27:24.601 09:55:47 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n 
nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 00:27:24.601 09:55:47 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:24.601 09:55:47 -- common/autotest_common.sh@10 -- # set +x 00:27:25.170 nvme0n1 00:27:25.170 09:55:48 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:25.170 09:55:48 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:27:25.170 09:55:48 -- host/auth.sh@73 -- # jq -r '.[].name' 00:27:25.170 09:55:48 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:25.170 09:55:48 -- common/autotest_common.sh@10 -- # set +x 00:27:25.170 09:55:48 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:25.170 09:55:48 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:25.170 09:55:48 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:25.170 09:55:48 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:25.170 09:55:48 -- common/autotest_common.sh@10 -- # set +x 00:27:25.170 09:55:48 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:25.170 09:55:48 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:27:25.171 09:55:48 -- host/auth.sh@110 -- # nvmet_auth_set_key sha512 ffdhe6144 2 00:27:25.171 09:55:48 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:27:25.171 09:55:48 -- host/auth.sh@44 -- # digest=sha512 00:27:25.171 09:55:48 -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:27:25.171 09:55:48 -- host/auth.sh@44 -- # keyid=2 00:27:25.171 09:55:48 -- host/auth.sh@45 -- # key=DHHC-1:01:OGY4NmI0ZGZmZGU3ZGE5YmQyMDJmZjY2ZTYwOTAyYjTeBEM7: 00:27:25.171 09:55:48 -- host/auth.sh@47 -- # echo 'hmac(sha512)' 00:27:25.171 09:55:48 -- host/auth.sh@48 -- # echo ffdhe6144 00:27:25.171 09:55:48 -- host/auth.sh@49 -- # echo DHHC-1:01:OGY4NmI0ZGZmZGU3ZGE5YmQyMDJmZjY2ZTYwOTAyYjTeBEM7: 00:27:25.171 09:55:48 -- host/auth.sh@111 -- # connect_authenticate sha512 ffdhe6144 2 00:27:25.171 09:55:48 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:27:25.171 09:55:48 -- host/auth.sh@68 -- # digest=sha512 00:27:25.171 09:55:48 -- host/auth.sh@68 -- # dhgroup=ffdhe6144 00:27:25.171 09:55:48 -- host/auth.sh@68 -- # keyid=2 00:27:25.171 09:55:48 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:27:25.171 09:55:48 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:25.171 09:55:48 -- common/autotest_common.sh@10 -- # set +x 00:27:25.171 09:55:48 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:25.171 09:55:48 -- host/auth.sh@70 -- # get_main_ns_ip 00:27:25.171 09:55:48 -- nvmf/common.sh@717 -- # local ip 00:27:25.171 09:55:48 -- nvmf/common.sh@718 -- # ip_candidates=() 00:27:25.171 09:55:48 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:27:25.171 09:55:48 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:25.171 09:55:48 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:25.171 09:55:48 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:27:25.171 09:55:48 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:25.171 09:55:48 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:27:25.171 09:55:48 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:27:25.171 09:55:48 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:27:25.171 09:55:48 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:27:25.171 09:55:48 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:25.171 09:55:48 -- 
common/autotest_common.sh@10 -- # set +x 00:27:25.430 nvme0n1 00:27:25.430 09:55:48 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:25.430 09:55:48 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:27:25.430 09:55:48 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:25.430 09:55:48 -- host/auth.sh@73 -- # jq -r '.[].name' 00:27:25.430 09:55:48 -- common/autotest_common.sh@10 -- # set +x 00:27:25.430 09:55:48 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:25.689 09:55:48 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:25.689 09:55:48 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:25.689 09:55:48 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:25.689 09:55:48 -- common/autotest_common.sh@10 -- # set +x 00:27:25.689 09:55:48 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:25.689 09:55:48 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:27:25.689 09:55:48 -- host/auth.sh@110 -- # nvmet_auth_set_key sha512 ffdhe6144 3 00:27:25.689 09:55:48 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:27:25.689 09:55:48 -- host/auth.sh@44 -- # digest=sha512 00:27:25.689 09:55:48 -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:27:25.689 09:55:48 -- host/auth.sh@44 -- # keyid=3 00:27:25.689 09:55:48 -- host/auth.sh@45 -- # key=DHHC-1:02:OGYxMGE0MzAzYjdhZDkwNzZjZmQ5NGVhNWZjNGNlZDA4MzM0YzM2NjRhZGJkZWM28+CvFA==: 00:27:25.689 09:55:48 -- host/auth.sh@47 -- # echo 'hmac(sha512)' 00:27:25.689 09:55:48 -- host/auth.sh@48 -- # echo ffdhe6144 00:27:25.689 09:55:48 -- host/auth.sh@49 -- # echo DHHC-1:02:OGYxMGE0MzAzYjdhZDkwNzZjZmQ5NGVhNWZjNGNlZDA4MzM0YzM2NjRhZGJkZWM28+CvFA==: 00:27:25.689 09:55:48 -- host/auth.sh@111 -- # connect_authenticate sha512 ffdhe6144 3 00:27:25.689 09:55:48 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:27:25.689 09:55:48 -- host/auth.sh@68 -- # digest=sha512 00:27:25.689 09:55:48 -- host/auth.sh@68 -- # dhgroup=ffdhe6144 00:27:25.689 09:55:48 -- host/auth.sh@68 -- # keyid=3 00:27:25.689 09:55:48 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:27:25.689 09:55:48 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:25.689 09:55:48 -- common/autotest_common.sh@10 -- # set +x 00:27:25.689 09:55:48 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:25.689 09:55:48 -- host/auth.sh@70 -- # get_main_ns_ip 00:27:25.690 09:55:48 -- nvmf/common.sh@717 -- # local ip 00:27:25.690 09:55:48 -- nvmf/common.sh@718 -- # ip_candidates=() 00:27:25.690 09:55:48 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:27:25.690 09:55:48 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:25.690 09:55:48 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:25.690 09:55:48 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:27:25.690 09:55:48 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:25.690 09:55:48 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:27:25.690 09:55:48 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:27:25.690 09:55:48 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:27:25.690 09:55:48 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 00:27:25.690 09:55:48 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:25.690 09:55:48 -- common/autotest_common.sh@10 -- # set +x 00:27:25.949 nvme0n1 00:27:25.949 09:55:49 -- common/autotest_common.sh@577 -- 
# [[ 0 == 0 ]] 00:27:25.949 09:55:49 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:27:25.949 09:55:49 -- host/auth.sh@73 -- # jq -r '.[].name' 00:27:25.949 09:55:49 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:25.949 09:55:49 -- common/autotest_common.sh@10 -- # set +x 00:27:25.949 09:55:49 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:25.949 09:55:49 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:25.949 09:55:49 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:25.949 09:55:49 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:25.949 09:55:49 -- common/autotest_common.sh@10 -- # set +x 00:27:25.949 09:55:49 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:25.949 09:55:49 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:27:25.949 09:55:49 -- host/auth.sh@110 -- # nvmet_auth_set_key sha512 ffdhe6144 4 00:27:25.949 09:55:49 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:27:25.949 09:55:49 -- host/auth.sh@44 -- # digest=sha512 00:27:25.949 09:55:49 -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:27:25.949 09:55:49 -- host/auth.sh@44 -- # keyid=4 00:27:25.949 09:55:49 -- host/auth.sh@45 -- # key=DHHC-1:03:ZjFmYzBmMDlmZmUwZTY5NTA4ODQwODM5MDQ5NzUzZDE1NmE3ZGZkMTY3ZWYyMTBhYmQ3N2ZjNDQ4YmZiNzcxZYTk7W0=: 00:27:25.949 09:55:49 -- host/auth.sh@47 -- # echo 'hmac(sha512)' 00:27:25.949 09:55:49 -- host/auth.sh@48 -- # echo ffdhe6144 00:27:25.949 09:55:49 -- host/auth.sh@49 -- # echo DHHC-1:03:ZjFmYzBmMDlmZmUwZTY5NTA4ODQwODM5MDQ5NzUzZDE1NmE3ZGZkMTY3ZWYyMTBhYmQ3N2ZjNDQ4YmZiNzcxZYTk7W0=: 00:27:25.949 09:55:49 -- host/auth.sh@111 -- # connect_authenticate sha512 ffdhe6144 4 00:27:25.949 09:55:49 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:27:25.949 09:55:49 -- host/auth.sh@68 -- # digest=sha512 00:27:25.949 09:55:49 -- host/auth.sh@68 -- # dhgroup=ffdhe6144 00:27:25.949 09:55:49 -- host/auth.sh@68 -- # keyid=4 00:27:25.949 09:55:49 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:27:25.949 09:55:49 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:25.949 09:55:49 -- common/autotest_common.sh@10 -- # set +x 00:27:26.209 09:55:49 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:26.209 09:55:49 -- host/auth.sh@70 -- # get_main_ns_ip 00:27:26.209 09:55:49 -- nvmf/common.sh@717 -- # local ip 00:27:26.209 09:55:49 -- nvmf/common.sh@718 -- # ip_candidates=() 00:27:26.209 09:55:49 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:27:26.209 09:55:49 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:26.209 09:55:49 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:26.209 09:55:49 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:27:26.209 09:55:49 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:26.209 09:55:49 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:27:26.209 09:55:49 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:27:26.209 09:55:49 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:27:26.209 09:55:49 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:27:26.209 09:55:49 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:26.209 09:55:49 -- common/autotest_common.sh@10 -- # set +x 00:27:26.468 nvme0n1 00:27:26.468 09:55:49 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:26.468 09:55:49 -- host/auth.sh@73 -- # rpc_cmd 
bdev_nvme_get_controllers 00:27:26.468 09:55:49 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:26.468 09:55:49 -- host/auth.sh@73 -- # jq -r '.[].name' 00:27:26.468 09:55:49 -- common/autotest_common.sh@10 -- # set +x 00:27:26.468 09:55:49 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:26.468 09:55:49 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:26.468 09:55:49 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:26.468 09:55:49 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:26.468 09:55:49 -- common/autotest_common.sh@10 -- # set +x 00:27:26.468 09:55:49 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:26.468 09:55:49 -- host/auth.sh@108 -- # for dhgroup in "${dhgroups[@]}" 00:27:26.468 09:55:49 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:27:26.468 09:55:49 -- host/auth.sh@110 -- # nvmet_auth_set_key sha512 ffdhe8192 0 00:27:26.468 09:55:49 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:27:26.468 09:55:49 -- host/auth.sh@44 -- # digest=sha512 00:27:26.468 09:55:49 -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:27:26.468 09:55:49 -- host/auth.sh@44 -- # keyid=0 00:27:26.468 09:55:49 -- host/auth.sh@45 -- # key=DHHC-1:00:M2MzOGVhYjFjYjdjODJlMTAyZDJhM2IwMWRmMmI4ODN6Hpbr: 00:27:26.468 09:55:49 -- host/auth.sh@47 -- # echo 'hmac(sha512)' 00:27:26.468 09:55:49 -- host/auth.sh@48 -- # echo ffdhe8192 00:27:26.468 09:55:49 -- host/auth.sh@49 -- # echo DHHC-1:00:M2MzOGVhYjFjYjdjODJlMTAyZDJhM2IwMWRmMmI4ODN6Hpbr: 00:27:26.468 09:55:49 -- host/auth.sh@111 -- # connect_authenticate sha512 ffdhe8192 0 00:27:26.468 09:55:49 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:27:26.468 09:55:49 -- host/auth.sh@68 -- # digest=sha512 00:27:26.468 09:55:49 -- host/auth.sh@68 -- # dhgroup=ffdhe8192 00:27:26.468 09:55:49 -- host/auth.sh@68 -- # keyid=0 00:27:26.468 09:55:49 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:27:26.468 09:55:49 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:26.468 09:55:49 -- common/autotest_common.sh@10 -- # set +x 00:27:26.468 09:55:49 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:26.468 09:55:49 -- host/auth.sh@70 -- # get_main_ns_ip 00:27:26.468 09:55:49 -- nvmf/common.sh@717 -- # local ip 00:27:26.468 09:55:49 -- nvmf/common.sh@718 -- # ip_candidates=() 00:27:26.468 09:55:49 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:27:26.468 09:55:49 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:26.468 09:55:49 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:26.468 09:55:49 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:27:26.468 09:55:49 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:26.468 09:55:49 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:27:26.469 09:55:49 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:27:26.469 09:55:49 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:27:26.469 09:55:49 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 00:27:26.469 09:55:49 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:26.469 09:55:49 -- common/autotest_common.sh@10 -- # set +x 00:27:27.051 nvme0n1 00:27:27.051 09:55:50 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:27.051 09:55:50 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:27:27.051 09:55:50 -- host/auth.sh@73 -- # jq -r '.[].name' 
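Each iteration in this sha512 pass follows the same shape: program the key on the kernel nvmet target, restrict the SPDK initiator to the digest and dhgroup under test, attach with the matching --dhchap-key, confirm the controller surfaces in bdev_nvme_get_controllers, then detach. A condensed sketch of one iteration, with the loop values written symbolically; only commands already visible in this log are used:

    nvmet_auth_set_key "$digest" "$dhgroup" "$keyid"        # key goes into the kernel target's configfs tree
    rpc_cmd bdev_nvme_set_options --dhchap-digests "$digest" --dhchap-dhgroups "$dhgroup"
    rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
        -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key "key$keyid"
    [[ $(rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name') == nvme0 ]]   # controller came up authenticated
    rpc_cmd bdev_nvme_detach_controller nvme0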
00:27:27.051 09:55:50 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:27.051 09:55:50 -- common/autotest_common.sh@10 -- # set +x 00:27:27.311 09:55:50 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:27.311 09:55:50 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:27.311 09:55:50 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:27.311 09:55:50 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:27.311 09:55:50 -- common/autotest_common.sh@10 -- # set +x 00:27:27.312 09:55:50 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:27.312 09:55:50 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:27:27.312 09:55:50 -- host/auth.sh@110 -- # nvmet_auth_set_key sha512 ffdhe8192 1 00:27:27.312 09:55:50 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:27:27.312 09:55:50 -- host/auth.sh@44 -- # digest=sha512 00:27:27.312 09:55:50 -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:27:27.312 09:55:50 -- host/auth.sh@44 -- # keyid=1 00:27:27.312 09:55:50 -- host/auth.sh@45 -- # key=DHHC-1:00:Mjc5MDM2ZDVjN2FjYzYxODdiNjIyM2ZiNzA3Y2Y5ZTQ4NWQ0ODUwNGMwZGUzMTQxPQDxBw==: 00:27:27.312 09:55:50 -- host/auth.sh@47 -- # echo 'hmac(sha512)' 00:27:27.312 09:55:50 -- host/auth.sh@48 -- # echo ffdhe8192 00:27:27.312 09:55:50 -- host/auth.sh@49 -- # echo DHHC-1:00:Mjc5MDM2ZDVjN2FjYzYxODdiNjIyM2ZiNzA3Y2Y5ZTQ4NWQ0ODUwNGMwZGUzMTQxPQDxBw==: 00:27:27.312 09:55:50 -- host/auth.sh@111 -- # connect_authenticate sha512 ffdhe8192 1 00:27:27.312 09:55:50 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:27:27.312 09:55:50 -- host/auth.sh@68 -- # digest=sha512 00:27:27.312 09:55:50 -- host/auth.sh@68 -- # dhgroup=ffdhe8192 00:27:27.312 09:55:50 -- host/auth.sh@68 -- # keyid=1 00:27:27.312 09:55:50 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:27:27.312 09:55:50 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:27.312 09:55:50 -- common/autotest_common.sh@10 -- # set +x 00:27:27.312 09:55:50 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:27.312 09:55:50 -- host/auth.sh@70 -- # get_main_ns_ip 00:27:27.312 09:55:50 -- nvmf/common.sh@717 -- # local ip 00:27:27.312 09:55:50 -- nvmf/common.sh@718 -- # ip_candidates=() 00:27:27.312 09:55:50 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:27:27.312 09:55:50 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:27.312 09:55:50 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:27.312 09:55:50 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:27:27.312 09:55:50 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:27.312 09:55:50 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:27:27.312 09:55:50 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:27:27.312 09:55:50 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:27:27.312 09:55:50 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 00:27:27.312 09:55:50 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:27.312 09:55:50 -- common/autotest_common.sh@10 -- # set +x 00:27:27.882 nvme0n1 00:27:27.882 09:55:51 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:27.882 09:55:51 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:27:27.882 09:55:51 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:27.882 09:55:51 -- host/auth.sh@73 -- # jq -r '.[].name' 00:27:27.882 09:55:51 -- 
common/autotest_common.sh@10 -- # set +x 00:27:27.882 09:55:51 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:27.882 09:55:51 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:27.882 09:55:51 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:27.882 09:55:51 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:27.882 09:55:51 -- common/autotest_common.sh@10 -- # set +x 00:27:27.882 09:55:51 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:27.882 09:55:51 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:27:27.882 09:55:51 -- host/auth.sh@110 -- # nvmet_auth_set_key sha512 ffdhe8192 2 00:27:27.882 09:55:51 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:27:27.882 09:55:51 -- host/auth.sh@44 -- # digest=sha512 00:27:27.882 09:55:51 -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:27:27.882 09:55:51 -- host/auth.sh@44 -- # keyid=2 00:27:27.882 09:55:51 -- host/auth.sh@45 -- # key=DHHC-1:01:OGY4NmI0ZGZmZGU3ZGE5YmQyMDJmZjY2ZTYwOTAyYjTeBEM7: 00:27:27.882 09:55:51 -- host/auth.sh@47 -- # echo 'hmac(sha512)' 00:27:27.882 09:55:51 -- host/auth.sh@48 -- # echo ffdhe8192 00:27:27.882 09:55:51 -- host/auth.sh@49 -- # echo DHHC-1:01:OGY4NmI0ZGZmZGU3ZGE5YmQyMDJmZjY2ZTYwOTAyYjTeBEM7: 00:27:27.882 09:55:51 -- host/auth.sh@111 -- # connect_authenticate sha512 ffdhe8192 2 00:27:27.882 09:55:51 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:27:27.882 09:55:51 -- host/auth.sh@68 -- # digest=sha512 00:27:27.882 09:55:51 -- host/auth.sh@68 -- # dhgroup=ffdhe8192 00:27:27.882 09:55:51 -- host/auth.sh@68 -- # keyid=2 00:27:27.882 09:55:51 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:27:27.882 09:55:51 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:27.882 09:55:51 -- common/autotest_common.sh@10 -- # set +x 00:27:27.882 09:55:51 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:27.882 09:55:51 -- host/auth.sh@70 -- # get_main_ns_ip 00:27:27.882 09:55:51 -- nvmf/common.sh@717 -- # local ip 00:27:27.882 09:55:51 -- nvmf/common.sh@718 -- # ip_candidates=() 00:27:27.882 09:55:51 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:27:27.882 09:55:51 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:27.882 09:55:51 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:27.882 09:55:51 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:27:27.882 09:55:51 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:27.882 09:55:51 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:27:27.882 09:55:51 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:27:27.882 09:55:51 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:27:27.882 09:55:51 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:27:27.882 09:55:51 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:27.882 09:55:51 -- common/autotest_common.sh@10 -- # set +x 00:27:28.830 nvme0n1 00:27:28.830 09:55:51 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:28.830 09:55:51 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:27:28.830 09:55:51 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:28.830 09:55:51 -- host/auth.sh@73 -- # jq -r '.[].name' 00:27:28.830 09:55:51 -- common/autotest_common.sh@10 -- # set +x 00:27:28.830 09:55:51 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:28.830 09:55:51 -- host/auth.sh@73 -- # [[ nvme0 
== \n\v\m\e\0 ]] 00:27:28.830 09:55:51 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:28.830 09:55:51 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:28.830 09:55:51 -- common/autotest_common.sh@10 -- # set +x 00:27:28.830 09:55:51 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:28.830 09:55:51 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:27:28.830 09:55:51 -- host/auth.sh@110 -- # nvmet_auth_set_key sha512 ffdhe8192 3 00:27:28.830 09:55:51 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:27:28.830 09:55:51 -- host/auth.sh@44 -- # digest=sha512 00:27:28.830 09:55:51 -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:27:28.830 09:55:51 -- host/auth.sh@44 -- # keyid=3 00:27:28.830 09:55:51 -- host/auth.sh@45 -- # key=DHHC-1:02:OGYxMGE0MzAzYjdhZDkwNzZjZmQ5NGVhNWZjNGNlZDA4MzM0YzM2NjRhZGJkZWM28+CvFA==: 00:27:28.830 09:55:51 -- host/auth.sh@47 -- # echo 'hmac(sha512)' 00:27:28.830 09:55:51 -- host/auth.sh@48 -- # echo ffdhe8192 00:27:28.830 09:55:51 -- host/auth.sh@49 -- # echo DHHC-1:02:OGYxMGE0MzAzYjdhZDkwNzZjZmQ5NGVhNWZjNGNlZDA4MzM0YzM2NjRhZGJkZWM28+CvFA==: 00:27:28.830 09:55:51 -- host/auth.sh@111 -- # connect_authenticate sha512 ffdhe8192 3 00:27:28.830 09:55:51 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:27:28.830 09:55:51 -- host/auth.sh@68 -- # digest=sha512 00:27:28.830 09:55:51 -- host/auth.sh@68 -- # dhgroup=ffdhe8192 00:27:28.830 09:55:51 -- host/auth.sh@68 -- # keyid=3 00:27:28.830 09:55:51 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:27:28.830 09:55:51 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:28.830 09:55:51 -- common/autotest_common.sh@10 -- # set +x 00:27:28.830 09:55:51 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:28.830 09:55:51 -- host/auth.sh@70 -- # get_main_ns_ip 00:27:28.830 09:55:51 -- nvmf/common.sh@717 -- # local ip 00:27:28.830 09:55:51 -- nvmf/common.sh@718 -- # ip_candidates=() 00:27:28.830 09:55:51 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:27:28.830 09:55:51 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:28.830 09:55:51 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:28.830 09:55:51 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:27:28.830 09:55:51 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:28.830 09:55:51 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:27:28.830 09:55:51 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:27:28.830 09:55:51 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:27:28.830 09:55:51 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 00:27:28.830 09:55:51 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:28.830 09:55:51 -- common/autotest_common.sh@10 -- # set +x 00:27:29.399 nvme0n1 00:27:29.399 09:55:52 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:29.399 09:55:52 -- host/auth.sh@73 -- # jq -r '.[].name' 00:27:29.399 09:55:52 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:27:29.399 09:55:52 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:29.399 09:55:52 -- common/autotest_common.sh@10 -- # set +x 00:27:29.399 09:55:52 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:29.399 09:55:52 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:29.399 09:55:52 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:29.399 
09:55:52 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:29.399 09:55:52 -- common/autotest_common.sh@10 -- # set +x 00:27:29.399 09:55:52 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:29.399 09:55:52 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:27:29.399 09:55:52 -- host/auth.sh@110 -- # nvmet_auth_set_key sha512 ffdhe8192 4 00:27:29.399 09:55:52 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:27:29.399 09:55:52 -- host/auth.sh@44 -- # digest=sha512 00:27:29.399 09:55:52 -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:27:29.399 09:55:52 -- host/auth.sh@44 -- # keyid=4 00:27:29.399 09:55:52 -- host/auth.sh@45 -- # key=DHHC-1:03:ZjFmYzBmMDlmZmUwZTY5NTA4ODQwODM5MDQ5NzUzZDE1NmE3ZGZkMTY3ZWYyMTBhYmQ3N2ZjNDQ4YmZiNzcxZYTk7W0=: 00:27:29.399 09:55:52 -- host/auth.sh@47 -- # echo 'hmac(sha512)' 00:27:29.399 09:55:52 -- host/auth.sh@48 -- # echo ffdhe8192 00:27:29.399 09:55:52 -- host/auth.sh@49 -- # echo DHHC-1:03:ZjFmYzBmMDlmZmUwZTY5NTA4ODQwODM5MDQ5NzUzZDE1NmE3ZGZkMTY3ZWYyMTBhYmQ3N2ZjNDQ4YmZiNzcxZYTk7W0=: 00:27:29.399 09:55:52 -- host/auth.sh@111 -- # connect_authenticate sha512 ffdhe8192 4 00:27:29.399 09:55:52 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:27:29.399 09:55:52 -- host/auth.sh@68 -- # digest=sha512 00:27:29.399 09:55:52 -- host/auth.sh@68 -- # dhgroup=ffdhe8192 00:27:29.399 09:55:52 -- host/auth.sh@68 -- # keyid=4 00:27:29.399 09:55:52 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:27:29.399 09:55:52 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:29.399 09:55:52 -- common/autotest_common.sh@10 -- # set +x 00:27:29.399 09:55:52 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:29.399 09:55:52 -- host/auth.sh@70 -- # get_main_ns_ip 00:27:29.399 09:55:52 -- nvmf/common.sh@717 -- # local ip 00:27:29.399 09:55:52 -- nvmf/common.sh@718 -- # ip_candidates=() 00:27:29.400 09:55:52 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:27:29.400 09:55:52 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:29.400 09:55:52 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:29.400 09:55:52 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:27:29.400 09:55:52 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:29.400 09:55:52 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:27:29.400 09:55:52 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:27:29.400 09:55:52 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:27:29.400 09:55:52 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:27:29.400 09:55:52 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:29.400 09:55:52 -- common/autotest_common.sh@10 -- # set +x 00:27:29.967 nvme0n1 00:27:29.967 09:55:53 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:29.967 09:55:53 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:27:29.967 09:55:53 -- host/auth.sh@73 -- # jq -r '.[].name' 00:27:29.967 09:55:53 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:29.967 09:55:53 -- common/autotest_common.sh@10 -- # set +x 00:27:29.967 09:55:53 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:29.967 09:55:53 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:29.967 09:55:53 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:29.967 09:55:53 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:29.967 
09:55:53 -- common/autotest_common.sh@10 -- # set +x 00:27:30.225 09:55:53 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:30.225 09:55:53 -- host/auth.sh@117 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:27:30.225 09:55:53 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:27:30.225 09:55:53 -- host/auth.sh@44 -- # digest=sha256 00:27:30.225 09:55:53 -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:27:30.225 09:55:53 -- host/auth.sh@44 -- # keyid=1 00:27:30.225 09:55:53 -- host/auth.sh@45 -- # key=DHHC-1:00:Mjc5MDM2ZDVjN2FjYzYxODdiNjIyM2ZiNzA3Y2Y5ZTQ4NWQ0ODUwNGMwZGUzMTQxPQDxBw==: 00:27:30.225 09:55:53 -- host/auth.sh@47 -- # echo 'hmac(sha256)' 00:27:30.225 09:55:53 -- host/auth.sh@48 -- # echo ffdhe2048 00:27:30.225 09:55:53 -- host/auth.sh@49 -- # echo DHHC-1:00:Mjc5MDM2ZDVjN2FjYzYxODdiNjIyM2ZiNzA3Y2Y5ZTQ4NWQ0ODUwNGMwZGUzMTQxPQDxBw==: 00:27:30.225 09:55:53 -- host/auth.sh@118 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:27:30.225 09:55:53 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:30.225 09:55:53 -- common/autotest_common.sh@10 -- # set +x 00:27:30.225 09:55:53 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:30.225 09:55:53 -- host/auth.sh@119 -- # get_main_ns_ip 00:27:30.225 09:55:53 -- nvmf/common.sh@717 -- # local ip 00:27:30.225 09:55:53 -- nvmf/common.sh@718 -- # ip_candidates=() 00:27:30.225 09:55:53 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:27:30.225 09:55:53 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:30.225 09:55:53 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:30.225 09:55:53 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:27:30.225 09:55:53 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:30.225 09:55:53 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:27:30.225 09:55:53 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:27:30.225 09:55:53 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:27:30.225 09:55:53 -- host/auth.sh@119 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:27:30.225 09:55:53 -- common/autotest_common.sh@638 -- # local es=0 00:27:30.225 09:55:53 -- common/autotest_common.sh@640 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:27:30.225 09:55:53 -- common/autotest_common.sh@626 -- # local arg=rpc_cmd 00:27:30.225 09:55:53 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:27:30.225 09:55:53 -- common/autotest_common.sh@630 -- # type -t rpc_cmd 00:27:30.225 09:55:53 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:27:30.225 09:55:53 -- common/autotest_common.sh@641 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:27:30.225 09:55:53 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:30.225 09:55:53 -- common/autotest_common.sh@10 -- # set +x 00:27:30.225 request: 00:27:30.225 { 00:27:30.225 "name": "nvme0", 00:27:30.225 "trtype": "tcp", 00:27:30.225 "traddr": "10.0.0.1", 00:27:30.225 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:27:30.225 "adrfam": "ipv4", 00:27:30.225 "trsvcid": "4420", 00:27:30.225 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:27:30.225 "method": "bdev_nvme_attach_controller", 00:27:30.225 "req_id": 1 00:27:30.225 } 00:27:30.225 Got JSON-RPC error 
response 00:27:30.225 response: 00:27:30.225 { 00:27:30.225 "code": -32602, 00:27:30.225 "message": "Invalid parameters" 00:27:30.225 } 00:27:30.225 09:55:53 -- common/autotest_common.sh@577 -- # [[ 1 == 0 ]] 00:27:30.225 09:55:53 -- common/autotest_common.sh@641 -- # es=1 00:27:30.225 09:55:53 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:27:30.225 09:55:53 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:27:30.225 09:55:53 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:27:30.225 09:55:53 -- host/auth.sh@121 -- # rpc_cmd bdev_nvme_get_controllers 00:27:30.225 09:55:53 -- host/auth.sh@121 -- # jq length 00:27:30.225 09:55:53 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:30.225 09:55:53 -- common/autotest_common.sh@10 -- # set +x 00:27:30.225 09:55:53 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:30.225 09:55:53 -- host/auth.sh@121 -- # (( 0 == 0 )) 00:27:30.225 09:55:53 -- host/auth.sh@124 -- # get_main_ns_ip 00:27:30.225 09:55:53 -- nvmf/common.sh@717 -- # local ip 00:27:30.225 09:55:53 -- nvmf/common.sh@718 -- # ip_candidates=() 00:27:30.225 09:55:53 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:27:30.225 09:55:53 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:30.225 09:55:53 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:30.225 09:55:53 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:27:30.225 09:55:53 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:30.225 09:55:53 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:27:30.225 09:55:53 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:27:30.225 09:55:53 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:27:30.225 09:55:53 -- host/auth.sh@124 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:27:30.225 09:55:53 -- common/autotest_common.sh@638 -- # local es=0 00:27:30.225 09:55:53 -- common/autotest_common.sh@640 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:27:30.225 09:55:53 -- common/autotest_common.sh@626 -- # local arg=rpc_cmd 00:27:30.225 09:55:53 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:27:30.225 09:55:53 -- common/autotest_common.sh@630 -- # type -t rpc_cmd 00:27:30.225 09:55:53 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:27:30.225 09:55:53 -- common/autotest_common.sh@641 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:27:30.225 09:55:53 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:30.225 09:55:53 -- common/autotest_common.sh@10 -- # set +x 00:27:30.225 request: 00:27:30.225 { 00:27:30.225 "name": "nvme0", 00:27:30.225 "trtype": "tcp", 00:27:30.225 "traddr": "10.0.0.1", 00:27:30.225 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:27:30.225 "adrfam": "ipv4", 00:27:30.225 "trsvcid": "4420", 00:27:30.225 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:27:30.225 "dhchap_key": "key2", 00:27:30.225 "method": "bdev_nvme_attach_controller", 00:27:30.225 "req_id": 1 00:27:30.225 } 00:27:30.225 Got JSON-RPC error response 00:27:30.225 response: 00:27:30.225 { 00:27:30.225 "code": -32602, 00:27:30.225 "message": "Invalid parameters" 00:27:30.225 } 00:27:30.225 09:55:53 -- common/autotest_common.sh@577 -- # [[ 1 == 0 ]] 
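Both failures above are the expected outcome: with DH-HMAC-CHAP keys configured on the kernel target, an attach attempt with no key, or with a key that does not match, is rejected and the RPC surfaces -32602 Invalid parameters, which the NOT wrapper turns into es=1. A minimal sketch of that negative-path check, using only the RPC flags visible in the log; the helper name check_auth_rejected is made up for illustration:

    check_auth_rejected() {
        # success here would mean the target accepted an unauthenticated or wrongly keyed host
        if rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
            -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 "$@"; then
            echo "attach unexpectedly succeeded" >&2
            return 1
        fi
    }
    check_auth_rejected                      # no --dhchap-key at all
    check_auth_rejected --dhchap-key key2    # key2 does not match what the target was given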
00:27:30.225 09:55:53 -- common/autotest_common.sh@641 -- # es=1 00:27:30.225 09:55:53 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:27:30.225 09:55:53 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:27:30.225 09:55:53 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:27:30.225 09:55:53 -- host/auth.sh@127 -- # rpc_cmd bdev_nvme_get_controllers 00:27:30.225 09:55:53 -- host/auth.sh@127 -- # jq length 00:27:30.225 09:55:53 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:30.225 09:55:53 -- common/autotest_common.sh@10 -- # set +x 00:27:30.225 09:55:53 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:30.225 09:55:53 -- host/auth.sh@127 -- # (( 0 == 0 )) 00:27:30.225 09:55:53 -- host/auth.sh@129 -- # trap - SIGINT SIGTERM EXIT 00:27:30.225 09:55:53 -- host/auth.sh@130 -- # cleanup 00:27:30.225 09:55:53 -- host/auth.sh@24 -- # nvmftestfini 00:27:30.225 09:55:53 -- nvmf/common.sh@477 -- # nvmfcleanup 00:27:30.225 09:55:53 -- nvmf/common.sh@117 -- # sync 00:27:30.225 09:55:53 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:27:30.225 09:55:53 -- nvmf/common.sh@120 -- # set +e 00:27:30.225 09:55:53 -- nvmf/common.sh@121 -- # for i in {1..20} 00:27:30.225 09:55:53 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:27:30.225 rmmod nvme_tcp 00:27:30.225 rmmod nvme_fabrics 00:27:30.484 09:55:53 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:27:30.484 09:55:53 -- nvmf/common.sh@124 -- # set -e 00:27:30.484 09:55:53 -- nvmf/common.sh@125 -- # return 0 00:27:30.484 09:55:53 -- nvmf/common.sh@478 -- # '[' -n 77939 ']' 00:27:30.484 09:55:53 -- nvmf/common.sh@479 -- # killprocess 77939 00:27:30.484 09:55:53 -- common/autotest_common.sh@936 -- # '[' -z 77939 ']' 00:27:30.484 09:55:53 -- common/autotest_common.sh@940 -- # kill -0 77939 00:27:30.484 09:55:53 -- common/autotest_common.sh@941 -- # uname 00:27:30.484 09:55:53 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:27:30.484 09:55:53 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 77939 00:27:30.484 09:55:53 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:27:30.484 killing process with pid 77939 00:27:30.484 09:55:53 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:27:30.484 09:55:53 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 77939' 00:27:30.484 09:55:53 -- common/autotest_common.sh@955 -- # kill 77939 00:27:30.484 09:55:53 -- common/autotest_common.sh@960 -- # wait 77939 00:27:31.420 09:55:54 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:27:31.420 09:55:54 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:27:31.420 09:55:54 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:27:31.420 09:55:54 -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:27:31.420 09:55:54 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:27:31.420 09:55:54 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:31.420 09:55:54 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:27:31.420 09:55:54 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:31.420 09:55:54 -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:27:31.420 09:55:54 -- host/auth.sh@25 -- # rm /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:27:31.420 09:55:54 -- host/auth.sh@26 -- # rmdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:27:31.420 09:55:54 -- host/auth.sh@27 -- # clean_kernel_target 00:27:31.420 09:55:54 -- 
nvmf/common.sh@673 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 ]] 00:27:31.420 09:55:54 -- nvmf/common.sh@675 -- # echo 0 00:27:31.420 09:55:54 -- nvmf/common.sh@677 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2024-02.io.spdk:cnode0 00:27:31.420 09:55:54 -- nvmf/common.sh@678 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:27:31.420 09:55:54 -- nvmf/common.sh@679 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:27:31.420 09:55:54 -- nvmf/common.sh@680 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:27:31.420 09:55:54 -- nvmf/common.sh@682 -- # modules=(/sys/module/nvmet/holders/*) 00:27:31.420 09:55:54 -- nvmf/common.sh@684 -- # modprobe -r nvmet_tcp nvmet 00:27:31.420 09:55:54 -- nvmf/common.sh@687 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:27:31.988 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:27:32.247 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:27:32.247 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:27:32.247 09:55:55 -- host/auth.sh@28 -- # rm -f /tmp/spdk.key-null.Czh /tmp/spdk.key-null.SiP /tmp/spdk.key-sha256.0TA /tmp/spdk.key-sha384.qG8 /tmp/spdk.key-sha512.fGL /home/vagrant/spdk_repo/spdk/../output/nvme-auth.log 00:27:32.247 09:55:55 -- host/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:27:32.815 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:27:32.815 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:27:32.815 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:27:32.815 00:27:32.815 real 0m40.243s 00:27:32.815 user 0m35.790s 00:27:32.815 sys 0m3.866s 00:27:32.815 09:55:55 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:27:32.815 09:55:55 -- common/autotest_common.sh@10 -- # set +x 00:27:32.815 ************************************ 00:27:32.815 END TEST nvmf_auth 00:27:32.815 ************************************ 00:27:32.815 09:55:56 -- nvmf/nvmf.sh@104 -- # [[ tcp == \t\c\p ]] 00:27:32.815 09:55:56 -- nvmf/nvmf.sh@105 -- # run_test nvmf_digest /home/vagrant/spdk_repo/spdk/test/nvmf/host/digest.sh --transport=tcp 00:27:32.815 09:55:56 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:27:32.815 09:55:56 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:27:32.815 09:55:56 -- common/autotest_common.sh@10 -- # set +x 00:27:32.815 ************************************ 00:27:32.815 START TEST nvmf_digest 00:27:32.815 ************************************ 00:27:32.815 09:55:56 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/digest.sh --transport=tcp 00:27:32.815 * Looking for test storage... 
00:27:32.815 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:27:32.815 09:55:56 -- host/digest.sh@12 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:27:32.815 09:55:56 -- nvmf/common.sh@7 -- # uname -s 00:27:32.815 09:55:56 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:27:32.815 09:55:56 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:27:32.815 09:55:56 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:27:32.815 09:55:56 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:27:32.815 09:55:56 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:27:32.815 09:55:56 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:27:32.815 09:55:56 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:27:32.815 09:55:56 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:27:32.815 09:55:56 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:27:32.815 09:55:56 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:27:32.815 09:55:56 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:e0c1fd61-2902-4c83-8893-eeb512aeb67d 00:27:32.815 09:55:56 -- nvmf/common.sh@18 -- # NVME_HOSTID=e0c1fd61-2902-4c83-8893-eeb512aeb67d 00:27:32.815 09:55:56 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:27:32.815 09:55:56 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:27:32.815 09:55:56 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:27:32.815 09:55:56 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:27:32.815 09:55:56 -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:27:32.815 09:55:56 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:27:32.815 09:55:56 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:32.815 09:55:56 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:32.815 09:55:56 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:32.815 09:55:56 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:32.815 09:55:56 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:32.815 09:55:56 -- paths/export.sh@5 -- # export PATH 00:27:32.816 09:55:56 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:32.816 09:55:56 -- nvmf/common.sh@47 -- # : 0 00:27:32.816 09:55:56 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:27:32.816 09:55:56 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:27:32.816 09:55:56 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:27:32.816 09:55:56 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:27:32.816 09:55:56 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:27:32.816 09:55:56 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:27:32.816 09:55:56 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:27:32.816 09:55:56 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:27:32.816 09:55:56 -- host/digest.sh@14 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:27:32.816 09:55:56 -- host/digest.sh@15 -- # bperfsock=/var/tmp/bperf.sock 00:27:32.816 09:55:56 -- host/digest.sh@16 -- # runtime=2 00:27:32.816 09:55:56 -- host/digest.sh@136 -- # [[ tcp != \t\c\p ]] 00:27:32.816 09:55:56 -- host/digest.sh@138 -- # nvmftestinit 00:27:32.816 09:55:56 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:27:32.816 09:55:56 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:27:32.816 09:55:56 -- nvmf/common.sh@437 -- # prepare_net_devs 00:27:32.816 09:55:56 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:27:32.816 09:55:56 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:27:32.816 09:55:56 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:32.816 09:55:56 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:27:32.816 09:55:56 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:33.075 09:55:56 -- nvmf/common.sh@403 -- # [[ virt != virt ]] 00:27:33.075 09:55:56 -- nvmf/common.sh@405 -- # [[ no == yes ]] 00:27:33.075 09:55:56 -- nvmf/common.sh@412 -- # [[ virt == phy ]] 00:27:33.075 09:55:56 -- nvmf/common.sh@415 -- # [[ virt == phy-fallback ]] 00:27:33.075 09:55:56 -- nvmf/common.sh@420 -- # [[ tcp == tcp ]] 00:27:33.075 09:55:56 -- nvmf/common.sh@421 -- # nvmf_veth_init 00:27:33.075 09:55:56 -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:27:33.075 09:55:56 -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:27:33.075 09:55:56 -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:27:33.075 09:55:56 -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:27:33.075 09:55:56 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 
00:27:33.075 09:55:56 -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:27:33.075 09:55:56 -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:27:33.075 09:55:56 -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:27:33.075 09:55:56 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:27:33.075 09:55:56 -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:27:33.075 09:55:56 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:27:33.075 09:55:56 -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:27:33.075 09:55:56 -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:27:33.075 09:55:56 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:27:33.075 Cannot find device "nvmf_tgt_br" 00:27:33.075 09:55:56 -- nvmf/common.sh@155 -- # true 00:27:33.075 09:55:56 -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:27:33.075 Cannot find device "nvmf_tgt_br2" 00:27:33.075 09:55:56 -- nvmf/common.sh@156 -- # true 00:27:33.075 09:55:56 -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:27:33.075 09:55:56 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:27:33.075 Cannot find device "nvmf_tgt_br" 00:27:33.075 09:55:56 -- nvmf/common.sh@158 -- # true 00:27:33.075 09:55:56 -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:27:33.075 Cannot find device "nvmf_tgt_br2" 00:27:33.075 09:55:56 -- nvmf/common.sh@159 -- # true 00:27:33.075 09:55:56 -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:27:33.075 09:55:56 -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:27:33.075 09:55:56 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:27:33.075 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:27:33.075 09:55:56 -- nvmf/common.sh@162 -- # true 00:27:33.075 09:55:56 -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:27:33.075 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:27:33.075 09:55:56 -- nvmf/common.sh@163 -- # true 00:27:33.075 09:55:56 -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:27:33.075 09:55:56 -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:27:33.075 09:55:56 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:27:33.075 09:55:56 -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:27:33.075 09:55:56 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:27:33.075 09:55:56 -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:27:33.075 09:55:56 -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:27:33.075 09:55:56 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:27:33.075 09:55:56 -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:27:33.334 09:55:56 -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:27:33.334 09:55:56 -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:27:33.334 09:55:56 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:27:33.334 09:55:56 -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:27:33.334 09:55:56 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:27:33.334 09:55:56 -- nvmf/common.sh@188 -- # ip netns 
exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:27:33.334 09:55:56 -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:27:33.334 09:55:56 -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:27:33.334 09:55:56 -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:27:33.334 09:55:56 -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:27:33.334 09:55:56 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:27:33.334 09:55:56 -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:27:33.334 09:55:56 -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:27:33.334 09:55:56 -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:27:33.334 09:55:56 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:27:33.334 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:27:33.334 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.082 ms 00:27:33.334 00:27:33.334 --- 10.0.0.2 ping statistics --- 00:27:33.334 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:33.334 rtt min/avg/max/mdev = 0.082/0.082/0.082/0.000 ms 00:27:33.334 09:55:56 -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:27:33.334 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:27:33.334 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.046 ms 00:27:33.334 00:27:33.334 --- 10.0.0.3 ping statistics --- 00:27:33.334 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:33.334 rtt min/avg/max/mdev = 0.046/0.046/0.046/0.000 ms 00:27:33.334 09:55:56 -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:27:33.334 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:27:33.334 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.025 ms 00:27:33.334 00:27:33.334 --- 10.0.0.1 ping statistics --- 00:27:33.334 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:33.334 rtt min/avg/max/mdev = 0.025/0.025/0.025/0.000 ms 00:27:33.334 09:55:56 -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:27:33.334 09:55:56 -- nvmf/common.sh@422 -- # return 0 00:27:33.334 09:55:56 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:27:33.334 09:55:56 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:27:33.334 09:55:56 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:27:33.334 09:55:56 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:27:33.334 09:55:56 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:27:33.334 09:55:56 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:27:33.334 09:55:56 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:27:33.334 09:55:56 -- host/digest.sh@140 -- # trap cleanup SIGINT SIGTERM EXIT 00:27:33.334 09:55:56 -- host/digest.sh@141 -- # [[ 0 -eq 1 ]] 00:27:33.334 09:55:56 -- host/digest.sh@145 -- # run_test nvmf_digest_clean run_digest 00:27:33.334 09:55:56 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:27:33.334 09:55:56 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:27:33.334 09:55:56 -- common/autotest_common.sh@10 -- # set +x 00:27:33.334 ************************************ 00:27:33.334 START TEST nvmf_digest_clean 00:27:33.334 ************************************ 00:27:33.334 09:55:56 -- common/autotest_common.sh@1111 -- # run_digest 00:27:33.334 09:55:56 -- host/digest.sh@120 -- # local dsa_initiator 00:27:33.334 09:55:56 -- host/digest.sh@121 -- # [[ '' == \d\s\a\_\i\n\i\t\i\a\t\o\r ]] 00:27:33.334 09:55:56 -- host/digest.sh@121 -- # dsa_initiator=false 
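Because NET_TYPE=virt, the digest suite runs over veth pairs and a network namespace instead of real NICs: nvmf_veth_init builds the topology above, and the three pings confirm reachability of 10.0.0.2, 10.0.0.3, and 10.0.0.1 before anything NVMe-related starts. A condensed sketch of that setup, reusing the interface names from the log; link-up steps and the second target interface are omitted for brevity:

    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br     # initiator side stays in the root namespace
    ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br       # target side moves into the namespace
    ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
    ip link add nvmf_br type bridge
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br master nvmf_br
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2                                            # target address must answer before the test proceeds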
00:27:33.334 09:55:56 -- host/digest.sh@123 -- # tgt_params=("--wait-for-rpc") 00:27:33.334 09:55:56 -- host/digest.sh@124 -- # nvmfappstart --wait-for-rpc 00:27:33.334 09:55:56 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:27:33.334 09:55:56 -- common/autotest_common.sh@710 -- # xtrace_disable 00:27:33.334 09:55:56 -- common/autotest_common.sh@10 -- # set +x 00:27:33.334 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:33.334 09:55:56 -- nvmf/common.sh@470 -- # nvmfpid=79556 00:27:33.334 09:55:56 -- nvmf/common.sh@471 -- # waitforlisten 79556 00:27:33.334 09:55:56 -- nvmf/common.sh@469 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:27:33.334 09:55:56 -- common/autotest_common.sh@817 -- # '[' -z 79556 ']' 00:27:33.334 09:55:56 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:33.334 09:55:56 -- common/autotest_common.sh@822 -- # local max_retries=100 00:27:33.334 09:55:56 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:33.334 09:55:56 -- common/autotest_common.sh@826 -- # xtrace_disable 00:27:33.334 09:55:56 -- common/autotest_common.sh@10 -- # set +x 00:27:33.593 [2024-04-18 09:55:56.790814] Starting SPDK v24.05-pre git sha1 65b4e17c6 / DPDK 23.11.0 initialization... 00:27:33.593 [2024-04-18 09:55:56.791003] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:27:33.593 [2024-04-18 09:55:56.971511] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:34.160 [2024-04-18 09:55:57.259978] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:27:34.160 [2024-04-18 09:55:57.260056] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:27:34.160 [2024-04-18 09:55:57.260082] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:27:34.160 [2024-04-18 09:55:57.260117] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:27:34.160 [2024-04-18 09:55:57.260162] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
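The target is launched inside the namespace with --wait-for-rpc so the test can finish configuration over JSON-RPC before subsystem initialization completes; nvmfpid records the process and waitforlisten blocks until /var/tmp/spdk.sock answers. A paraphrased sketch of that launch-and-wait step, with backgrounding and pid capture implied by the log rather than shown in it:

    ip netns exec nvmf_tgt_ns_spdk \
        /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc &
    nvmfpid=$!
    waitforlisten "$nvmfpid"    # autotest helper: waits for the app to listen on /var/tmp/spdk.sock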
00:27:34.160 [2024-04-18 09:55:57.260209] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:27:34.418 09:55:57 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:27:34.418 09:55:57 -- common/autotest_common.sh@850 -- # return 0 00:27:34.418 09:55:57 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:27:34.418 09:55:57 -- common/autotest_common.sh@716 -- # xtrace_disable 00:27:34.418 09:55:57 -- common/autotest_common.sh@10 -- # set +x 00:27:34.418 09:55:57 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:27:34.418 09:55:57 -- host/digest.sh@125 -- # [[ '' == \d\s\a\_\t\a\r\g\e\t ]] 00:27:34.418 09:55:57 -- host/digest.sh@126 -- # common_target_config 00:27:34.418 09:55:57 -- host/digest.sh@43 -- # rpc_cmd 00:27:34.418 09:55:57 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:34.418 09:55:57 -- common/autotest_common.sh@10 -- # set +x 00:27:34.985 null0 00:27:34.985 [2024-04-18 09:55:58.126542] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:27:34.985 [2024-04-18 09:55:58.150711] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:27:34.985 09:55:58 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:34.985 09:55:58 -- host/digest.sh@128 -- # run_bperf randread 4096 128 false 00:27:34.985 09:55:58 -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:27:34.985 09:55:58 -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:27:34.986 09:55:58 -- host/digest.sh@80 -- # rw=randread 00:27:34.986 09:55:58 -- host/digest.sh@80 -- # bs=4096 00:27:34.986 09:55:58 -- host/digest.sh@80 -- # qd=128 00:27:34.986 09:55:58 -- host/digest.sh@80 -- # scan_dsa=false 00:27:34.986 09:55:58 -- host/digest.sh@83 -- # bperfpid=79589 00:27:34.986 09:55:58 -- host/digest.sh@82 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:27:34.986 09:55:58 -- host/digest.sh@84 -- # waitforlisten 79589 /var/tmp/bperf.sock 00:27:34.986 09:55:58 -- common/autotest_common.sh@817 -- # '[' -z 79589 ']' 00:27:34.986 09:55:58 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bperf.sock 00:27:34.986 09:55:58 -- common/autotest_common.sh@822 -- # local max_retries=100 00:27:34.986 09:55:58 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:27:34.986 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:27:34.986 09:55:58 -- common/autotest_common.sh@826 -- # xtrace_disable 00:27:34.986 09:55:58 -- common/autotest_common.sh@10 -- # set +x 00:27:34.986 [2024-04-18 09:55:58.260614] Starting SPDK v24.05-pre git sha1 65b4e17c6 / DPDK 23.11.0 initialization... 
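common_target_config drives the target over /var/tmp/spdk.sock: the "null0" line and the two tcp.c notices above are the target acknowledging a null bdev, the TCP transport, and a listener on 10.0.0.2:4420. The individual RPCs are not echoed in this trace, but the sequence amounts to something like the following (bdev size, block size and serial number are illustrative; the NQN is the one the bdevperf side attaches to below):

    rpc=scripts/rpc.py; sock=/var/tmp/spdk.sock
    "$rpc" -s "$sock" framework_start_init                             # leave the --wait-for-rpc pause
    "$rpc" -s "$sock" nvmf_create_transport -t tcp -o                  # '-t tcp -o' matches NVMF_TRANSPORT_OPTS above
    "$rpc" -s "$sock" bdev_null_create null0 100 4096                  # backing namespace (illustrative sizes)
    "$rpc" -s "$sock" nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK0001
    "$rpc" -s "$sock" nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 null0
    "$rpc" -s "$sock" nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420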
00:27:34.986 [2024-04-18 09:55:58.261014] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79589 ] 00:27:35.245 [2024-04-18 09:55:58.439003] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:35.504 [2024-04-18 09:55:58.742300] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:27:36.072 09:55:59 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:27:36.072 09:55:59 -- common/autotest_common.sh@850 -- # return 0 00:27:36.072 09:55:59 -- host/digest.sh@86 -- # false 00:27:36.072 09:55:59 -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:27:36.072 09:55:59 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:27:36.640 09:55:59 -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:27:36.640 09:55:59 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:27:36.898 nvme0n1 00:27:36.898 09:56:00 -- host/digest.sh@92 -- # bperf_py perform_tests 00:27:36.898 09:56:00 -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:27:37.157 Running I/O for 2 seconds... 00:27:39.112 00:27:39.112 Latency(us) 00:27:39.112 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:39.112 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:27:39.112 nvme0n1 : 2.01 10916.77 42.64 0.00 0.00 11715.17 10366.60 24903.68 00:27:39.112 =================================================================================================================== 00:27:39.112 Total : 10916.77 42.64 0.00 0.00 11715.17 10366.60 24903.68 00:27:39.112 0 00:27:39.112 09:56:02 -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:27:39.112 09:56:02 -- host/digest.sh@93 -- # get_accel_stats 00:27:39.112 09:56:02 -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:27:39.112 09:56:02 -- host/digest.sh@37 -- # jq -rc '.operations[] 00:27:39.112 | select(.opcode=="crc32c") 00:27:39.112 | "\(.module_name) \(.executed)"' 00:27:39.112 09:56:02 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:27:39.370 09:56:02 -- host/digest.sh@94 -- # false 00:27:39.370 09:56:02 -- host/digest.sh@94 -- # exp_module=software 00:27:39.370 09:56:02 -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:27:39.370 09:56:02 -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:27:39.370 09:56:02 -- host/digest.sh@98 -- # killprocess 79589 00:27:39.370 09:56:02 -- common/autotest_common.sh@936 -- # '[' -z 79589 ']' 00:27:39.370 09:56:02 -- common/autotest_common.sh@940 -- # kill -0 79589 00:27:39.370 09:56:02 -- common/autotest_common.sh@941 -- # uname 00:27:39.370 09:56:02 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:27:39.371 09:56:02 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 79589 00:27:39.371 killing process with pid 79589 00:27:39.371 Received shutdown signal, test time was about 2.000000 seconds 00:27:39.371 00:27:39.371 Latency(us) 00:27:39.371 Device Information : runtime(s) IOPS MiB/s Fail/s 
TO/s Average min max 00:27:39.371 =================================================================================================================== 00:27:39.371 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:27:39.371 09:56:02 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:27:39.371 09:56:02 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:27:39.371 09:56:02 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 79589' 00:27:39.371 09:56:02 -- common/autotest_common.sh@955 -- # kill 79589 00:27:39.371 09:56:02 -- common/autotest_common.sh@960 -- # wait 79589 00:27:40.747 09:56:03 -- host/digest.sh@129 -- # run_bperf randread 131072 16 false 00:27:40.747 09:56:03 -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:27:40.747 09:56:03 -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:27:40.747 09:56:03 -- host/digest.sh@80 -- # rw=randread 00:27:40.747 09:56:03 -- host/digest.sh@80 -- # bs=131072 00:27:40.747 09:56:03 -- host/digest.sh@80 -- # qd=16 00:27:40.747 09:56:03 -- host/digest.sh@80 -- # scan_dsa=false 00:27:40.747 09:56:03 -- host/digest.sh@83 -- # bperfpid=79662 00:27:40.747 09:56:03 -- host/digest.sh@82 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:27:40.747 09:56:03 -- host/digest.sh@84 -- # waitforlisten 79662 /var/tmp/bperf.sock 00:27:40.747 09:56:03 -- common/autotest_common.sh@817 -- # '[' -z 79662 ']' 00:27:40.747 09:56:03 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bperf.sock 00:27:40.747 09:56:03 -- common/autotest_common.sh@822 -- # local max_retries=100 00:27:40.747 09:56:03 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:27:40.747 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:27:40.747 09:56:03 -- common/autotest_common.sh@826 -- # xtrace_disable 00:27:40.747 09:56:03 -- common/autotest_common.sh@10 -- # set +x 00:27:40.747 [2024-04-18 09:56:03.852339] Starting SPDK v24.05-pre git sha1 65b4e17c6 / DPDK 23.11.0 initialization... 00:27:40.747 [2024-04-18 09:56:03.852743] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79662 ] 00:27:40.747 I/O size of 131072 is greater than zero copy threshold (65536). 00:27:40.747 Zero copy mechanism will not be used. 
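The second bdevperf instance (pid 79662) is coming up the same way the first one did. Every run_bperf repeats the same initiator-side pattern traced above: start bdevperf parked on --wait-for-rpc against its own socket /var/tmp/bperf.sock, finish framework init, attach an NVMe-oF bdev over TCP with data digest enabled (--ddgst), then drive the 2-second workload from bdevperf.py. Condensed from the trace, paths shortened to the spdk repo root, with only the workload flags changing between runs:

    bperf_sock=/var/tmp/bperf.sock
    build/examples/bdevperf -m 2 -r "$bperf_sock" -w randread -o 131072 -t 2 -q 16 -z --wait-for-rpc &
    # the script captures this pid as bperfpid and waitforlisten's on $bperf_sock
    scripts/rpc.py -s "$bperf_sock" framework_start_init
    scripts/rpc.py -s "$bperf_sock" bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 \
        -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0      # creates nvme0n1 with TCP data digest on
    examples/bdev/bdevperf/bdevperf.py -s "$bperf_sock" perform_tests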
00:27:40.747 [2024-04-18 09:56:04.026302] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:41.005 [2024-04-18 09:56:04.269908] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:27:41.572 09:56:04 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:27:41.572 09:56:04 -- common/autotest_common.sh@850 -- # return 0 00:27:41.572 09:56:04 -- host/digest.sh@86 -- # false 00:27:41.572 09:56:04 -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:27:41.572 09:56:04 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:27:42.161 09:56:05 -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:27:42.161 09:56:05 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:27:42.419 nvme0n1 00:27:42.419 09:56:05 -- host/digest.sh@92 -- # bperf_py perform_tests 00:27:42.419 09:56:05 -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:27:42.677 I/O size of 131072 is greater than zero copy threshold (65536). 00:27:42.677 Zero copy mechanism will not be used. 00:27:42.677 Running I/O for 2 seconds... 00:27:44.578 00:27:44.578 Latency(us) 00:27:44.578 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:44.578 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072) 00:27:44.578 nvme0n1 : 2.00 5947.45 743.43 0.00 0.00 2686.12 2472.49 6940.86 00:27:44.578 =================================================================================================================== 00:27:44.578 Total : 5947.45 743.43 0.00 0.00 2686.12 2472.49 6940.86 00:27:44.578 0 00:27:44.579 09:56:07 -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:27:44.579 09:56:07 -- host/digest.sh@93 -- # get_accel_stats 00:27:44.579 09:56:07 -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:27:44.579 09:56:07 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:27:44.579 09:56:07 -- host/digest.sh@37 -- # jq -rc '.operations[] 00:27:44.579 | select(.opcode=="crc32c") 00:27:44.579 | "\(.module_name) \(.executed)"' 00:27:44.838 09:56:08 -- host/digest.sh@94 -- # false 00:27:44.838 09:56:08 -- host/digest.sh@94 -- # exp_module=software 00:27:44.838 09:56:08 -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:27:44.838 09:56:08 -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:27:44.838 09:56:08 -- host/digest.sh@98 -- # killprocess 79662 00:27:44.838 09:56:08 -- common/autotest_common.sh@936 -- # '[' -z 79662 ']' 00:27:44.838 09:56:08 -- common/autotest_common.sh@940 -- # kill -0 79662 00:27:44.838 09:56:08 -- common/autotest_common.sh@941 -- # uname 00:27:44.838 09:56:08 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:27:44.838 09:56:08 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 79662 00:27:44.838 killing process with pid 79662 00:27:44.838 Received shutdown signal, test time was about 2.000000 seconds 00:27:44.838 00:27:44.838 Latency(us) 00:27:44.838 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:44.838 =================================================================================================================== 00:27:44.838 
Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:27:44.838 09:56:08 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:27:44.838 09:56:08 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:27:44.838 09:56:08 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 79662' 00:27:44.838 09:56:08 -- common/autotest_common.sh@955 -- # kill 79662 00:27:44.838 09:56:08 -- common/autotest_common.sh@960 -- # wait 79662 00:27:46.214 09:56:09 -- host/digest.sh@130 -- # run_bperf randwrite 4096 128 false 00:27:46.214 09:56:09 -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:27:46.214 09:56:09 -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:27:46.214 09:56:09 -- host/digest.sh@80 -- # rw=randwrite 00:27:46.214 09:56:09 -- host/digest.sh@80 -- # bs=4096 00:27:46.214 09:56:09 -- host/digest.sh@80 -- # qd=128 00:27:46.214 09:56:09 -- host/digest.sh@80 -- # scan_dsa=false 00:27:46.214 09:56:09 -- host/digest.sh@83 -- # bperfpid=79735 00:27:46.214 09:56:09 -- host/digest.sh@82 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:27:46.214 09:56:09 -- host/digest.sh@84 -- # waitforlisten 79735 /var/tmp/bperf.sock 00:27:46.214 09:56:09 -- common/autotest_common.sh@817 -- # '[' -z 79735 ']' 00:27:46.214 09:56:09 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bperf.sock 00:27:46.214 09:56:09 -- common/autotest_common.sh@822 -- # local max_retries=100 00:27:46.214 09:56:09 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:27:46.214 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:27:46.214 09:56:09 -- common/autotest_common.sh@826 -- # xtrace_disable 00:27:46.214 09:56:09 -- common/autotest_common.sh@10 -- # set +x 00:27:46.214 [2024-04-18 09:56:09.471560] Starting SPDK v24.05-pre git sha1 65b4e17c6 / DPDK 23.11.0 initialization... 
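Two runs are done, and each one ended with the same digest accounting: after perform_tests the script asks bdevperf's accel layer how many crc32c operations ran and in which module; with scan_dsa=false the expectation is at least one execution, all in software. The check, condensed from the digest.sh@93-96 lines above (the process substitution feeding read is an assumption about the helper's plumbing, not shown verbatim):

    filter='.operations[] | select(.opcode=="crc32c") | "\(.module_name) \(.executed)"'
    read -r acc_module acc_executed < <(
        scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats | jq -rc "$filter")
    exp_module=software                          # no DSA offload requested for this initiator
    (( acc_executed > 0 ))                       # the digest path was actually exercised
    [[ $acc_module == "$exp_module" ]]           # and by the expected accel module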
00:27:46.214 [2024-04-18 09:56:09.471861] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79735 ] 00:27:46.473 [2024-04-18 09:56:09.638810] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:46.731 [2024-04-18 09:56:09.912765] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:27:46.989 09:56:10 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:27:46.989 09:56:10 -- common/autotest_common.sh@850 -- # return 0 00:27:46.989 09:56:10 -- host/digest.sh@86 -- # false 00:27:46.989 09:56:10 -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:27:46.989 09:56:10 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:27:47.924 09:56:10 -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:27:47.924 09:56:10 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:27:47.924 nvme0n1 00:27:47.924 09:56:11 -- host/digest.sh@92 -- # bperf_py perform_tests 00:27:47.924 09:56:11 -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:27:48.182 Running I/O for 2 seconds... 00:27:50.084 00:27:50.084 Latency(us) 00:27:50.084 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:50.084 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:27:50.084 nvme0n1 : 2.01 11395.57 44.51 0.00 0.00 11219.22 9472.93 23235.49 00:27:50.084 =================================================================================================================== 00:27:50.084 Total : 11395.57 44.51 0.00 0.00 11219.22 9472.93 23235.49 00:27:50.084 0 00:27:50.084 09:56:13 -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:27:50.084 09:56:13 -- host/digest.sh@93 -- # get_accel_stats 00:27:50.084 09:56:13 -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:27:50.084 09:56:13 -- host/digest.sh@37 -- # jq -rc '.operations[] 00:27:50.084 | select(.opcode=="crc32c") 00:27:50.084 | "\(.module_name) \(.executed)"' 00:27:50.084 09:56:13 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:27:50.650 09:56:13 -- host/digest.sh@94 -- # false 00:27:50.650 09:56:13 -- host/digest.sh@94 -- # exp_module=software 00:27:50.650 09:56:13 -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:27:50.650 09:56:13 -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:27:50.650 09:56:13 -- host/digest.sh@98 -- # killprocess 79735 00:27:50.650 09:56:13 -- common/autotest_common.sh@936 -- # '[' -z 79735 ']' 00:27:50.650 09:56:13 -- common/autotest_common.sh@940 -- # kill -0 79735 00:27:50.650 09:56:13 -- common/autotest_common.sh@941 -- # uname 00:27:50.650 09:56:13 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:27:50.650 09:56:13 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 79735 00:27:50.650 killing process with pid 79735 00:27:50.650 09:56:13 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:27:50.650 09:56:13 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:27:50.650 
09:56:13 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 79735' 00:27:50.650 09:56:13 -- common/autotest_common.sh@955 -- # kill 79735 00:27:50.650 Received shutdown signal, test time was about 2.000000 seconds 00:27:50.650 00:27:50.650 Latency(us) 00:27:50.650 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:50.650 =================================================================================================================== 00:27:50.650 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:27:50.650 09:56:13 -- common/autotest_common.sh@960 -- # wait 79735 00:27:51.586 09:56:14 -- host/digest.sh@131 -- # run_bperf randwrite 131072 16 false 00:27:51.586 09:56:14 -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:27:51.586 09:56:14 -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:27:51.586 09:56:14 -- host/digest.sh@80 -- # rw=randwrite 00:27:51.586 09:56:14 -- host/digest.sh@80 -- # bs=131072 00:27:51.586 09:56:14 -- host/digest.sh@80 -- # qd=16 00:27:51.586 09:56:14 -- host/digest.sh@80 -- # scan_dsa=false 00:27:51.586 09:56:14 -- host/digest.sh@83 -- # bperfpid=79808 00:27:51.586 09:56:14 -- host/digest.sh@82 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:27:51.586 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:27:51.586 09:56:14 -- host/digest.sh@84 -- # waitforlisten 79808 /var/tmp/bperf.sock 00:27:51.586 09:56:14 -- common/autotest_common.sh@817 -- # '[' -z 79808 ']' 00:27:51.586 09:56:14 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bperf.sock 00:27:51.586 09:56:14 -- common/autotest_common.sh@822 -- # local max_retries=100 00:27:51.586 09:56:14 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:27:51.586 09:56:14 -- common/autotest_common.sh@826 -- # xtrace_disable 00:27:51.586 09:56:14 -- common/autotest_common.sh@10 -- # set +x 00:27:51.586 [2024-04-18 09:56:14.934635] Starting SPDK v24.05-pre git sha1 65b4e17c6 / DPDK 23.11.0 initialization... 00:27:51.586 [2024-04-18 09:56:14.935064] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79808 ] 00:27:51.586 I/O size of 131072 is greater than zero copy threshold (65536). 00:27:51.586 Zero copy mechanism will not be used. 
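The fourth and last workload shape is starting (pid 79808). The four run_bperf calls (digest.sh@128-131 in the trace) differ only in I/O pattern, block size and queue depth; the script spells them out one by one rather than looping, but the sweep is effectively:

    #   rw         bs       qd
    #   randread   4096     128     deep queue of 4 KiB reads
    #   randread   131072   16      shallow queue of 128 KiB reads (above the 65536-byte zero-copy threshold noted in the log)
    #   randwrite  4096     128
    #   randwrite  131072   16
    for params in "randread 4096 128" "randread 131072 16" \
                  "randwrite 4096 128" "randwrite 131072 16"; do
        run_bperf $params false                  # trailing false = scan_dsa off (no DSA offload)
    done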
00:27:51.844 [2024-04-18 09:56:15.102193] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:52.103 [2024-04-18 09:56:15.406755] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:27:52.669 09:56:15 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:27:52.669 09:56:15 -- common/autotest_common.sh@850 -- # return 0 00:27:52.669 09:56:15 -- host/digest.sh@86 -- # false 00:27:52.669 09:56:15 -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:27:52.669 09:56:15 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:27:53.236 09:56:16 -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:27:53.236 09:56:16 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:27:53.803 nvme0n1 00:27:53.803 09:56:16 -- host/digest.sh@92 -- # bperf_py perform_tests 00:27:53.803 09:56:16 -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:27:53.803 I/O size of 131072 is greater than zero copy threshold (65536). 00:27:53.803 Zero copy mechanism will not be used. 00:27:53.803 Running I/O for 2 seconds... 00:27:55.705 00:27:55.705 Latency(us) 00:27:55.705 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:55.705 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072) 00:27:55.705 nvme0n1 : 2.00 4690.28 586.29 0.00 0.00 3402.52 1846.92 5153.51 00:27:55.705 =================================================================================================================== 00:27:55.705 Total : 4690.28 586.29 0.00 0.00 3402.52 1846.92 5153.51 00:27:55.705 0 00:27:55.964 09:56:19 -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:27:55.964 09:56:19 -- host/digest.sh@93 -- # get_accel_stats 00:27:55.964 09:56:19 -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:27:55.964 09:56:19 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:27:55.964 09:56:19 -- host/digest.sh@37 -- # jq -rc '.operations[] 00:27:55.964 | select(.opcode=="crc32c") 00:27:55.964 | "\(.module_name) \(.executed)"' 00:27:55.964 09:56:19 -- host/digest.sh@94 -- # false 00:27:55.964 09:56:19 -- host/digest.sh@94 -- # exp_module=software 00:27:55.964 09:56:19 -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:27:55.964 09:56:19 -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:27:55.964 09:56:19 -- host/digest.sh@98 -- # killprocess 79808 00:27:55.964 09:56:19 -- common/autotest_common.sh@936 -- # '[' -z 79808 ']' 00:27:55.964 09:56:19 -- common/autotest_common.sh@940 -- # kill -0 79808 00:27:55.964 09:56:19 -- common/autotest_common.sh@941 -- # uname 00:27:56.223 09:56:19 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:27:56.223 09:56:19 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 79808 00:27:56.223 09:56:19 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:27:56.223 09:56:19 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:27:56.223 killing process with pid 79808 00:27:56.223 Received shutdown signal, test time was about 2.000000 seconds 00:27:56.223 00:27:56.223 Latency(us) 00:27:56.223 Device Information : runtime(s) IOPS MiB/s 
Fail/s TO/s Average min max 00:27:56.223 =================================================================================================================== 00:27:56.223 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:27:56.223 09:56:19 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 79808' 00:27:56.223 09:56:19 -- common/autotest_common.sh@955 -- # kill 79808 00:27:56.223 09:56:19 -- common/autotest_common.sh@960 -- # wait 79808 00:27:57.180 09:56:20 -- host/digest.sh@132 -- # killprocess 79556 00:27:57.180 09:56:20 -- common/autotest_common.sh@936 -- # '[' -z 79556 ']' 00:27:57.180 09:56:20 -- common/autotest_common.sh@940 -- # kill -0 79556 00:27:57.180 09:56:20 -- common/autotest_common.sh@941 -- # uname 00:27:57.180 09:56:20 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:27:57.180 09:56:20 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 79556 00:27:57.180 09:56:20 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:27:57.180 09:56:20 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:27:57.180 09:56:20 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 79556' 00:27:57.180 killing process with pid 79556 00:27:57.180 09:56:20 -- common/autotest_common.sh@955 -- # kill 79556 00:27:57.180 09:56:20 -- common/autotest_common.sh@960 -- # wait 79556 00:27:58.558 ************************************ 00:27:58.558 END TEST nvmf_digest_clean 00:27:58.558 ************************************ 00:27:58.558 00:27:58.558 real 0m25.115s 00:27:58.558 user 0m48.004s 00:27:58.558 sys 0m5.028s 00:27:58.558 09:56:21 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:27:58.558 09:56:21 -- common/autotest_common.sh@10 -- # set +x 00:27:58.558 09:56:21 -- host/digest.sh@147 -- # run_test nvmf_digest_error run_digest_error 00:27:58.558 09:56:21 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:27:58.558 09:56:21 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:27:58.558 09:56:21 -- common/autotest_common.sh@10 -- # set +x 00:27:58.558 ************************************ 00:27:58.558 START TEST nvmf_digest_error 00:27:58.558 ************************************ 00:27:58.558 09:56:21 -- common/autotest_common.sh@1111 -- # run_digest_error 00:27:58.558 09:56:21 -- host/digest.sh@102 -- # nvmfappstart --wait-for-rpc 00:27:58.558 09:56:21 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:27:58.558 09:56:21 -- common/autotest_common.sh@710 -- # xtrace_disable 00:27:58.558 09:56:21 -- common/autotest_common.sh@10 -- # set +x 00:27:58.558 09:56:21 -- nvmf/common.sh@470 -- # nvmfpid=79926 00:27:58.558 09:56:21 -- nvmf/common.sh@471 -- # waitforlisten 79926 00:27:58.558 09:56:21 -- nvmf/common.sh@469 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:27:58.558 09:56:21 -- common/autotest_common.sh@817 -- # '[' -z 79926 ']' 00:27:58.558 09:56:21 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:58.558 09:56:21 -- common/autotest_common.sh@822 -- # local max_retries=100 00:27:58.558 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:58.558 09:56:21 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
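nvmf_digest_clean is finished (the START/END banners, the real/user/sys timing and the exit status above all come from the run_test wrapper in autotest_common.sh), and digest.sh moves on to the error-injection variant with a fresh nvmf_tgt (pid 79926). Conceptually the wrapper is just a timed, banner-wrapped invocation; this is a sketch, not the real helper, which also manages xtrace state and failure accounting:

    run_test() {                                 # sketch of the banner/timing wrapper seen in the log
        local name=$1; shift
        echo "START TEST $name"
        time "$@"                                # e.g. run_test nvmf_digest_error run_digest_error
        echo "END TEST $name"
    }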
00:27:58.558 09:56:21 -- common/autotest_common.sh@826 -- # xtrace_disable 00:27:58.558 09:56:21 -- common/autotest_common.sh@10 -- # set +x 00:27:58.817 [2024-04-18 09:56:22.026438] Starting SPDK v24.05-pre git sha1 65b4e17c6 / DPDK 23.11.0 initialization... 00:27:58.817 [2024-04-18 09:56:22.026617] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:27:58.817 [2024-04-18 09:56:22.199858] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:59.076 [2024-04-18 09:56:22.440787] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:27:59.076 [2024-04-18 09:56:22.440872] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:27:59.076 [2024-04-18 09:56:22.440908] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:27:59.076 [2024-04-18 09:56:22.440936] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:27:59.076 [2024-04-18 09:56:22.440951] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:27:59.076 [2024-04-18 09:56:22.440996] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:27:59.652 09:56:22 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:27:59.652 09:56:22 -- common/autotest_common.sh@850 -- # return 0 00:27:59.652 09:56:22 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:27:59.652 09:56:22 -- common/autotest_common.sh@716 -- # xtrace_disable 00:27:59.652 09:56:22 -- common/autotest_common.sh@10 -- # set +x 00:27:59.934 09:56:23 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:27:59.934 09:56:23 -- host/digest.sh@104 -- # rpc_cmd accel_assign_opc -o crc32c -m error 00:27:59.934 09:56:23 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:59.934 09:56:23 -- common/autotest_common.sh@10 -- # set +x 00:27:59.934 [2024-04-18 09:56:23.045939] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation crc32c will be assigned to module error 00:27:59.934 09:56:23 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:59.934 09:56:23 -- host/digest.sh@105 -- # common_target_config 00:27:59.934 09:56:23 -- host/digest.sh@43 -- # rpc_cmd 00:27:59.934 09:56:23 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:59.934 09:56:23 -- common/autotest_common.sh@10 -- # set +x 00:28:00.193 null0 00:28:00.193 [2024-04-18 09:56:23.390844] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:28:00.193 [2024-04-18 09:56:23.414984] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:28:00.193 09:56:23 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:00.193 09:56:23 -- host/digest.sh@108 -- # run_bperf_err randread 4096 128 00:28:00.193 09:56:23 -- host/digest.sh@54 -- # local rw bs qd 00:28:00.193 09:56:23 -- host/digest.sh@56 -- # rw=randread 00:28:00.193 09:56:23 -- host/digest.sh@56 -- # bs=4096 00:28:00.193 09:56:23 -- host/digest.sh@56 -- # qd=128 00:28:00.193 09:56:23 -- host/digest.sh@58 -- # bperfpid=79958 00:28:00.193 09:56:23 -- host/digest.sh@60 -- # waitforlisten 79958 /var/tmp/bperf.sock 00:28:00.193 09:56:23 -- host/digest.sh@57 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w 
randread -o 4096 -t 2 -q 128 -z 00:28:00.193 09:56:23 -- common/autotest_common.sh@817 -- # '[' -z 79958 ']' 00:28:00.193 09:56:23 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bperf.sock 00:28:00.193 09:56:23 -- common/autotest_common.sh@822 -- # local max_retries=100 00:28:00.193 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:28:00.193 09:56:23 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:28:00.193 09:56:23 -- common/autotest_common.sh@826 -- # xtrace_disable 00:28:00.193 09:56:23 -- common/autotest_common.sh@10 -- # set +x 00:28:00.193 [2024-04-18 09:56:23.525868] Starting SPDK v24.05-pre git sha1 65b4e17c6 / DPDK 23.11.0 initialization... 00:28:00.193 [2024-04-18 09:56:23.526041] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79958 ] 00:28:00.451 [2024-04-18 09:56:23.705972] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:00.710 [2024-04-18 09:56:23.984709] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:28:01.277 09:56:24 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:28:01.277 09:56:24 -- common/autotest_common.sh@850 -- # return 0 00:28:01.277 09:56:24 -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:28:01.277 09:56:24 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:28:01.535 09:56:24 -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:28:01.535 09:56:24 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:01.535 09:56:24 -- common/autotest_common.sh@10 -- # set +x 00:28:01.535 09:56:24 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:01.535 09:56:24 -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:28:01.535 09:56:24 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:28:01.793 nvme0n1 00:28:01.793 09:56:25 -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256 00:28:01.793 09:56:25 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:01.793 09:56:25 -- common/autotest_common.sh@10 -- # set +x 00:28:01.794 09:56:25 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:01.794 09:56:25 -- host/digest.sh@69 -- # bperf_py perform_tests 00:28:01.794 09:56:25 -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:28:02.052 Running I/O for 2 seconds... 
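This is where the error test earns its name. On the target, crc32c was routed to the "error" accel module (accel_assign_opc -o crc32c -m error, acknowledged by the accel_rpc notice above); on the bdevperf side, NVMe error statistics are switched on and the bdev retry count adjusted, injection is kept off (-t disable) while the controller connects with --ddgst, and corruption is then armed (-t corrupt -i 256) before perform_tests starts. The corrupted digests surface as the "data digest error" / COMMAND TRANSIENT TRANSPORT ERROR (00/22) records that fill the next stretch of the log. Condensed from the trace (the tgt_sock/bperf_sock names are local shorthand; rpc_cmd in the trace talks to the target's default /var/tmp/spdk.sock, bperf_rpc to bdevperf's socket):

    tgt_sock=/var/tmp/spdk.sock                  # target app, where crc32c runs in the 'error' module
    bperf_sock=/var/tmp/bperf.sock               # bdevperf app, the NVMe/TCP host
    scripts/rpc.py -s "$bperf_sock" bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
    scripts/rpc.py -s "$tgt_sock"  accel_error_inject_error -o crc32c -t disable    # keep digests clean while connecting
    scripts/rpc.py -s "$bperf_sock" bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 \
        -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
    scripts/rpc.py -s "$tgt_sock"  accel_error_inject_error -o crc32c -t corrupt -i 256   # arm corruption, flags as traced
    examples/bdev/bdevperf/bdevperf.py -s "$bperf_sock" perform_tests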
00:28:02.052 [2024-04-18 09:56:25.327493] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007440) 00:28:02.052 [2024-04-18 09:56:25.327615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:18616 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:02.052 [2024-04-18 09:56:25.327640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:02.052 [2024-04-18 09:56:25.350483] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007440) 00:28:02.052 [2024-04-18 09:56:25.350535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:5887 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:02.052 [2024-04-18 09:56:25.350560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:02.052 [2024-04-18 09:56:25.372579] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007440) 00:28:02.052 [2024-04-18 09:56:25.372651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:23957 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:02.052 [2024-04-18 09:56:25.372672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:02.052 [2024-04-18 09:56:25.396150] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007440) 00:28:02.052 [2024-04-18 09:56:25.396241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:3484 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:02.052 [2024-04-18 09:56:25.396272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:02.052 [2024-04-18 09:56:25.421368] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007440) 00:28:02.052 [2024-04-18 09:56:25.421475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:21503 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:02.052 [2024-04-18 09:56:25.421499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:02.311 [2024-04-18 09:56:25.445095] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007440) 00:28:02.311 [2024-04-18 09:56:25.445186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:10232 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:02.311 [2024-04-18 09:56:25.445223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:02.311 [2024-04-18 09:56:25.467898] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007440) 00:28:02.311 [2024-04-18 09:56:25.467968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:19563 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:02.311 [2024-04-18 09:56:25.467988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:02.311 [2024-04-18 09:56:25.492401] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007440) 00:28:02.311 [2024-04-18 09:56:25.492487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12655 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:02.311 [2024-04-18 09:56:25.492514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:02.311 [2024-04-18 09:56:25.516532] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007440) 00:28:02.311 [2024-04-18 09:56:25.516601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:9819 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:02.311 [2024-04-18 09:56:25.516624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:02.311 [2024-04-18 09:56:25.540985] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007440) 00:28:02.311 [2024-04-18 09:56:25.541116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:21449 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:02.311 [2024-04-18 09:56:25.541156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:02.311 [2024-04-18 09:56:25.565075] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007440) 00:28:02.311 [2024-04-18 09:56:25.565147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:16346 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:02.311 [2024-04-18 09:56:25.565171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:02.311 [2024-04-18 09:56:25.588135] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007440) 00:28:02.311 [2024-04-18 09:56:25.588207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:4652 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:02.312 [2024-04-18 09:56:25.588233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:02.312 [2024-04-18 09:56:25.613377] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007440) 00:28:02.312 [2024-04-18 09:56:25.613498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:17375 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:02.312 [2024-04-18 09:56:25.613521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:02.312 [2024-04-18 09:56:25.638540] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007440) 00:28:02.312 [2024-04-18 09:56:25.638627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:24323 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:02.312 
[2024-04-18 09:56:25.638654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:02.312 [2024-04-18 09:56:25.662762] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007440) 00:28:02.312 [2024-04-18 09:56:25.662893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20428 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:02.312 [2024-04-18 09:56:25.662927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:02.312 [2024-04-18 09:56:25.687274] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007440) 00:28:02.312 [2024-04-18 09:56:25.687567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:16634 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:02.312 [2024-04-18 09:56:25.687720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:02.570 [2024-04-18 09:56:25.712318] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007440) 00:28:02.570 [2024-04-18 09:56:25.712686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:22479 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:02.570 [2024-04-18 09:56:25.712821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:02.570 [2024-04-18 09:56:25.736033] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007440) 00:28:02.570 [2024-04-18 09:56:25.736304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:6884 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:02.570 [2024-04-18 09:56:25.736340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:02.570 [2024-04-18 09:56:25.759007] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007440) 00:28:02.570 [2024-04-18 09:56:25.759101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:18013 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:02.570 [2024-04-18 09:56:25.759139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:02.570 [2024-04-18 09:56:25.782365] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007440) 00:28:02.570 [2024-04-18 09:56:25.782462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:2738 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:02.570 [2024-04-18 09:56:25.782493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:02.570 [2024-04-18 09:56:25.805291] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007440) 00:28:02.570 [2024-04-18 09:56:25.805388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:41 nsid:1 lba:13980 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:02.570 [2024-04-18 09:56:25.805412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:02.570 [2024-04-18 09:56:25.829738] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007440) 00:28:02.570 [2024-04-18 09:56:25.829825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:16221 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:02.570 [2024-04-18 09:56:25.829852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:02.570 [2024-04-18 09:56:25.854278] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007440) 00:28:02.570 [2024-04-18 09:56:25.854386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:17880 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:02.570 [2024-04-18 09:56:25.854409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:02.570 [2024-04-18 09:56:25.877952] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007440) 00:28:02.570 [2024-04-18 09:56:25.878051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:6619 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:02.570 [2024-04-18 09:56:25.878088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:02.571 [2024-04-18 09:56:25.902493] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007440) 00:28:02.571 [2024-04-18 09:56:25.902648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:8172 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:02.571 [2024-04-18 09:56:25.902671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:02.571 [2024-04-18 09:56:25.924875] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007440) 00:28:02.571 [2024-04-18 09:56:25.924939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:13579 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:02.571 [2024-04-18 09:56:25.924963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:02.571 [2024-04-18 09:56:25.946606] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007440) 00:28:02.571 [2024-04-18 09:56:25.946692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:7865 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:02.571 [2024-04-18 09:56:25.946714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:02.830 [2024-04-18 09:56:25.969144] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007440) 00:28:02.830 
[2024-04-18 09:56:25.969213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:2220 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:02.830 [2024-04-18 09:56:25.969239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:02.830 [2024-04-18 09:56:25.991684] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007440) 00:28:02.830 [2024-04-18 09:56:25.991765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:7313 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:02.830 [2024-04-18 09:56:25.991788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:02.830 [2024-04-18 09:56:26.013929] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007440) 00:28:02.830 [2024-04-18 09:56:26.014017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:12097 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:02.830 [2024-04-18 09:56:26.014057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:02.830 [2024-04-18 09:56:26.035530] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007440) 00:28:02.830 [2024-04-18 09:56:26.035588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:11692 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:02.830 [2024-04-18 09:56:26.035624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:02.830 [2024-04-18 09:56:26.058187] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007440) 00:28:02.830 [2024-04-18 09:56:26.058314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:22285 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:02.830 [2024-04-18 09:56:26.058343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:02.830 [2024-04-18 09:56:26.080767] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007440) 00:28:02.830 [2024-04-18 09:56:26.080862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:4241 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:02.830 [2024-04-18 09:56:26.080885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:02.830 [2024-04-18 09:56:26.105070] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007440) 00:28:02.830 [2024-04-18 09:56:26.105198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:13819 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:02.830 [2024-04-18 09:56:26.105240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:02.830 [2024-04-18 09:56:26.128270] 
nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007440) 00:28:02.830 [2024-04-18 09:56:26.128369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:3467 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:02.830 [2024-04-18 09:56:26.128390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:02.830 [2024-04-18 09:56:26.149457] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007440) 00:28:02.830 [2024-04-18 09:56:26.149536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:14484 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:02.830 [2024-04-18 09:56:26.149573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:02.830 [2024-04-18 09:56:26.169776] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007440) 00:28:02.830 [2024-04-18 09:56:26.169872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:3356 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:02.830 [2024-04-18 09:56:26.169892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:02.830 [2024-04-18 09:56:26.190826] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007440) 00:28:02.830 [2024-04-18 09:56:26.190943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:9194 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:02.830 [2024-04-18 09:56:26.190966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:02.830 [2024-04-18 09:56:26.212960] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007440) 00:28:02.830 [2024-04-18 09:56:26.213044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:3427 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:02.830 [2024-04-18 09:56:26.213085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:03.089 [2024-04-18 09:56:26.236102] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007440) 00:28:03.089 [2024-04-18 09:56:26.236221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:8843 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:03.089 [2024-04-18 09:56:26.236252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:03.089 [2024-04-18 09:56:26.259990] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007440) 00:28:03.089 [2024-04-18 09:56:26.260078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:5784 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:03.089 [2024-04-18 09:56:26.260100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:81 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:03.089 [2024-04-18 09:56:26.282422] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007440) 00:28:03.089 [2024-04-18 09:56:26.282535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:17518 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:03.089 [2024-04-18 09:56:26.282562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:03.089 [2024-04-18 09:56:26.304851] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007440) 00:28:03.089 [2024-04-18 09:56:26.304924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:19055 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:03.089 [2024-04-18 09:56:26.304947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:03.089 [2024-04-18 09:56:26.327822] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007440) 00:28:03.089 [2024-04-18 09:56:26.327932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:18920 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:03.089 [2024-04-18 09:56:26.327959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:03.089 [2024-04-18 09:56:26.351035] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007440) 00:28:03.089 [2024-04-18 09:56:26.351115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:12951 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:03.089 [2024-04-18 09:56:26.351150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:03.089 [2024-04-18 09:56:26.374390] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007440) 00:28:03.089 [2024-04-18 09:56:26.374501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:21911 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:03.089 [2024-04-18 09:56:26.374527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:03.089 [2024-04-18 09:56:26.396172] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007440) 00:28:03.089 [2024-04-18 09:56:26.396270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:3611 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:03.089 [2024-04-18 09:56:26.396292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:03.089 [2024-04-18 09:56:26.419410] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007440) 00:28:03.089 [2024-04-18 09:56:26.419531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:21843 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:03.089 [2024-04-18 09:56:26.419558] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:03.089 [2024-04-18 09:56:26.441040] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007440) 00:28:03.089 [2024-04-18 09:56:26.441156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:3549 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:03.089 [2024-04-18 09:56:26.441180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:03.089 [2024-04-18 09:56:26.463429] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007440) 00:28:03.089 [2024-04-18 09:56:26.463491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:17954 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:03.089 [2024-04-18 09:56:26.463517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:03.348 [2024-04-18 09:56:26.487318] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007440) 00:28:03.348 [2024-04-18 09:56:26.487410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:8194 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:03.348 [2024-04-18 09:56:26.487434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:03.348 [2024-04-18 09:56:26.510663] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007440) 00:28:03.348 [2024-04-18 09:56:26.510735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:8995 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:03.348 [2024-04-18 09:56:26.510767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:03.348 [2024-04-18 09:56:26.533502] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007440) 00:28:03.348 [2024-04-18 09:56:26.533577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:20062 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:03.348 [2024-04-18 09:56:26.533600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:03.348 [2024-04-18 09:56:26.557623] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007440) 00:28:03.348 [2024-04-18 09:56:26.557744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:13185 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:03.348 [2024-04-18 09:56:26.557772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:03.348 [2024-04-18 09:56:26.583132] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007440) 00:28:03.348 [2024-04-18 09:56:26.583258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:5102 len:1 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:03.348 [2024-04-18 09:56:26.583283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:03.348 [2024-04-18 09:56:26.605956] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007440) 00:28:03.348 [2024-04-18 09:56:26.606031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:2913 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:03.348 [2024-04-18 09:56:26.606056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:03.348 [2024-04-18 09:56:26.627641] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007440) 00:28:03.348 [2024-04-18 09:56:26.627795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:12606 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:03.348 [2024-04-18 09:56:26.627819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:03.348 [2024-04-18 09:56:26.651260] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007440) 00:28:03.348 [2024-04-18 09:56:26.651342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:17363 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:03.348 [2024-04-18 09:56:26.651379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:03.348 [2024-04-18 09:56:26.674645] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007440) 00:28:03.348 [2024-04-18 09:56:26.674773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:20445 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:03.348 [2024-04-18 09:56:26.674796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:03.348 [2024-04-18 09:56:26.698934] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007440) 00:28:03.348 [2024-04-18 09:56:26.699028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:15810 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:03.348 [2024-04-18 09:56:26.699056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:03.348 [2024-04-18 09:56:26.723333] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007440) 00:28:03.348 [2024-04-18 09:56:26.723439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:22735 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:03.348 [2024-04-18 09:56:26.723463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:03.607 [2024-04-18 09:56:26.747373] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007440) 00:28:03.607 [2024-04-18 09:56:26.747469] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:13286 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:03.607 [2024-04-18 09:56:26.747495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:03.607 [2024-04-18 09:56:26.770547] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007440) 00:28:03.607 [2024-04-18 09:56:26.770612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:18937 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:03.607 [2024-04-18 09:56:26.770634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:03.608 [2024-04-18 09:56:26.803362] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007440) 00:28:03.608 [2024-04-18 09:56:26.803457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:10159 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:03.608 [2024-04-18 09:56:26.803480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:03.608 [2024-04-18 09:56:26.827185] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007440) 00:28:03.608 [2024-04-18 09:56:26.827258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:25237 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:03.608 [2024-04-18 09:56:26.827284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:03.608 [2024-04-18 09:56:26.850083] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007440) 00:28:03.608 [2024-04-18 09:56:26.850174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:22843 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:03.608 [2024-04-18 09:56:26.850198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:03.608 [2024-04-18 09:56:26.873545] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007440) 00:28:03.608 [2024-04-18 09:56:26.873628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:17054 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:03.608 [2024-04-18 09:56:26.873659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:03.608 [2024-04-18 09:56:26.898840] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007440) 00:28:03.608 [2024-04-18 09:56:26.898967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:9792 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:03.608 [2024-04-18 09:56:26.898992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:03.608 [2024-04-18 09:56:26.923532] 
nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007440) 00:28:03.608 [2024-04-18 09:56:26.923621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:16708 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:03.608 [2024-04-18 09:56:26.923648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:03.608 [2024-04-18 09:56:26.947866] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007440) 00:28:03.608 [2024-04-18 09:56:26.947994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:2049 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:03.608 [2024-04-18 09:56:26.948018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:03.608 [2024-04-18 09:56:26.970983] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007440) 00:28:03.608 [2024-04-18 09:56:26.971052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:23128 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:03.608 [2024-04-18 09:56:26.971092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:03.608 [2024-04-18 09:56:26.994401] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007440) 00:28:03.608 [2024-04-18 09:56:26.994491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:22587 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:03.608 [2024-04-18 09:56:26.994514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:03.877 [2024-04-18 09:56:27.019467] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007440) 00:28:03.877 [2024-04-18 09:56:27.019561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:3775 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:03.877 [2024-04-18 09:56:27.019589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:03.877 [2024-04-18 09:56:27.043540] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007440) 00:28:03.877 [2024-04-18 09:56:27.043633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:17687 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:03.877 [2024-04-18 09:56:27.043656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:03.877 [2024-04-18 09:56:27.066100] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007440) 00:28:03.877 [2024-04-18 09:56:27.066187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:22497 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:03.877 [2024-04-18 09:56:27.066214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:03.877 [2024-04-18 09:56:27.088649] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007440) 00:28:03.877 [2024-04-18 09:56:27.088756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:1048 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:03.877 [2024-04-18 09:56:27.088779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:03.877 [2024-04-18 09:56:27.111115] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007440) 00:28:03.877 [2024-04-18 09:56:27.111197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:13849 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:03.877 [2024-04-18 09:56:27.111233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:03.877 [2024-04-18 09:56:27.133942] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007440) 00:28:03.877 [2024-04-18 09:56:27.134065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:13326 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:03.877 [2024-04-18 09:56:27.134088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:03.877 [2024-04-18 09:56:27.157544] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007440) 00:28:03.877 [2024-04-18 09:56:27.157667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:2597 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:03.877 [2024-04-18 09:56:27.157694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:03.877 [2024-04-18 09:56:27.181428] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007440) 00:28:03.877 [2024-04-18 09:56:27.181522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:10752 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:03.877 [2024-04-18 09:56:27.181546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:03.877 [2024-04-18 09:56:27.204271] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007440) 00:28:03.877 [2024-04-18 09:56:27.204366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:14743 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:03.877 [2024-04-18 09:56:27.204408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:03.877 [2024-04-18 09:56:27.228619] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007440) 00:28:03.877 [2024-04-18 09:56:27.228720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:17700 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:03.877 [2024-04-18 
09:56:27.228756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:03.877 [2024-04-18 09:56:27.252994] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007440) 00:28:03.877 [2024-04-18 09:56:27.253098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:15937 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:03.877 [2024-04-18 09:56:27.253136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:04.135 [2024-04-18 09:56:27.276229] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007440) 00:28:04.135 [2024-04-18 09:56:27.276329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:7624 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:04.135 [2024-04-18 09:56:27.276361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:04.135 [2024-04-18 09:56:27.297325] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007440) 00:28:04.135 [2024-04-18 09:56:27.297406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:1195 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:04.135 [2024-04-18 09:56:27.297438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:04.135 00:28:04.135 Latency(us) 00:28:04.135 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:04.135 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:28:04.135 nvme0n1 : 2.01 10822.27 42.27 0.00 0.00 11817.16 9472.93 45517.73 00:28:04.135 =================================================================================================================== 00:28:04.135 Total : 10822.27 42.27 0.00 0.00 11817.16 9472.93 45517.73 00:28:04.135 0 00:28:04.135 09:56:27 -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:28:04.135 09:56:27 -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:28:04.135 09:56:27 -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:28:04.135 | .driver_specific 00:28:04.135 | .nvme_error 00:28:04.135 | .status_code 00:28:04.135 | .command_transient_transport_error' 00:28:04.135 09:56:27 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:28:04.393 09:56:27 -- host/digest.sh@71 -- # (( 85 > 0 )) 00:28:04.393 09:56:27 -- host/digest.sh@73 -- # killprocess 79958 00:28:04.393 09:56:27 -- common/autotest_common.sh@936 -- # '[' -z 79958 ']' 00:28:04.393 09:56:27 -- common/autotest_common.sh@940 -- # kill -0 79958 00:28:04.393 09:56:27 -- common/autotest_common.sh@941 -- # uname 00:28:04.393 09:56:27 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:28:04.393 09:56:27 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 79958 00:28:04.393 09:56:27 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:28:04.393 09:56:27 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:28:04.393 killing process with pid 79958 00:28:04.393 09:56:27 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 
79958' 00:28:04.393 09:56:27 -- common/autotest_common.sh@955 -- # kill 79958 00:28:04.393 Received shutdown signal, test time was about 2.000000 seconds 00:28:04.393 00:28:04.393 Latency(us) 00:28:04.393 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:04.393 =================================================================================================================== 00:28:04.393 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:28:04.393 09:56:27 -- common/autotest_common.sh@960 -- # wait 79958 00:28:05.329 09:56:28 -- host/digest.sh@109 -- # run_bperf_err randread 131072 16 00:28:05.329 09:56:28 -- host/digest.sh@54 -- # local rw bs qd 00:28:05.329 09:56:28 -- host/digest.sh@56 -- # rw=randread 00:28:05.329 09:56:28 -- host/digest.sh@56 -- # bs=131072 00:28:05.329 09:56:28 -- host/digest.sh@56 -- # qd=16 00:28:05.329 09:56:28 -- host/digest.sh@58 -- # bperfpid=80025 00:28:05.329 09:56:28 -- host/digest.sh@60 -- # waitforlisten 80025 /var/tmp/bperf.sock 00:28:05.329 09:56:28 -- host/digest.sh@57 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z 00:28:05.329 09:56:28 -- common/autotest_common.sh@817 -- # '[' -z 80025 ']' 00:28:05.329 09:56:28 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bperf.sock 00:28:05.329 09:56:28 -- common/autotest_common.sh@822 -- # local max_retries=100 00:28:05.329 09:56:28 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:28:05.329 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:28:05.329 09:56:28 -- common/autotest_common.sh@826 -- # xtrace_disable 00:28:05.329 09:56:28 -- common/autotest_common.sh@10 -- # set +x 00:28:05.588 [2024-04-18 09:56:28.787861] Starting SPDK v24.05-pre git sha1 65b4e17c6 / DPDK 23.11.0 initialization... 00:28:05.588 I/O size of 131072 is greater than zero copy threshold (65536). 00:28:05.588 Zero copy mechanism will not be used. 
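At this point host/digest.sh has killed the first bperf instance (pid 79958) after its transient-error count came back as 85, and moves on to the next case, run_bperf_err randread 131072 16: a fresh bdevperf is started idle (-z) on the same private RPC socket and the script waits for it to listen before sending any configuration. Condensed from the xtrace output above (waitforlisten is the suite's helper from autotest_common.sh; the backgrounding is implied by the bperfpid assignment), the launch is roughly:

    # 128 KiB random reads, queue depth 16, 2 s run; -z keeps bdevperf idle until perform_tests
    /home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
        -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z &
    bperfpid=$!                                   # 80025 in this run
    # block until the process is up and its RPC socket accepts connections
    waitforlisten "$bperfpid" /var/tmp/bperf.sock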
00:28:05.588 [2024-04-18 09:56:28.788031] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80025 ] 00:28:05.588 [2024-04-18 09:56:28.962026] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:05.846 [2024-04-18 09:56:29.201410] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:28:06.414 09:56:29 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:28:06.414 09:56:29 -- common/autotest_common.sh@850 -- # return 0 00:28:06.414 09:56:29 -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:28:06.414 09:56:29 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:28:06.674 09:56:29 -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:28:06.674 09:56:29 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:06.674 09:56:29 -- common/autotest_common.sh@10 -- # set +x 00:28:06.674 09:56:29 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:06.674 09:56:29 -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:28:06.674 09:56:29 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:28:06.933 nvme0n1 00:28:06.933 09:56:30 -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32 00:28:06.933 09:56:30 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:06.933 09:56:30 -- common/autotest_common.sh@10 -- # set +x 00:28:06.933 09:56:30 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:06.933 09:56:30 -- host/digest.sh@69 -- # bperf_py perform_tests 00:28:06.933 09:56:30 -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:28:07.192 I/O size of 131072 is greater than zero copy threshold (65536). 00:28:07.192 Zero copy mechanism will not be used. 00:28:07.192 Running I/O for 2 seconds... 
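The rest of the trace above is the per-case setup on the new bperf instance: per-command NVMe error statistics and unlimited bdev retries are enabled, crc32c error injection is reset, the controller is attached over TCP with --ddgst so the host verifies the data digest of every received data PDU, and corruption of every 32nd crc32c operation is then armed before the 2-second job is started. Condensed into the script's own helpers (bperf_rpc wraps rpc.py -s /var/tmp/bperf.sock, rpc_cmd is the suite's generic RPC helper with no -s /var/tmp/bperf.sock in its trace, bperf_py wraps bdevperf.py), the sequence is roughly:

    # count NVMe status codes per bdev and retry transient failures indefinitely
    bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
    # make sure crc32c error injection starts out disabled
    rpc_cmd accel_error_inject_error -o crc32c -t disable
    # attach over TCP with data digest enabled on the host side
    bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 \
        -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
    # corrupt every 32nd crc32c operation, then run the I/O job
    rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32
    bperf_py perform_tests        # -> bdevperf.py -s /var/tmp/bperf.sock perform_tests
    # afterwards the transient-error count is read back and must be non-zero
    bperf_rpc bdev_get_iostat -b nvme0n1 | jq -r \
        '.bdevs[0] | .driver_specific | .nvme_error | .status_code | .command_transient_transport_error'

Each injected corruption is reported below as a data digest error on the TCP qpair and completed as COMMAND TRANSIENT TRANSPORT ERROR (00/22); with --bdev-retry-count -1 the bdev layer retries those reads instead of failing them, which is why the job still completes and only the error counter, checked with (( count > 0 )) as in the first run's value of 85, records the corruption.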
00:28:07.192 [2024-04-18 09:56:30.411163] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007440) 00:28:07.192 [2024-04-18 09:56:30.411243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:07.192 [2024-04-18 09:56:30.411268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:07.192 [2024-04-18 09:56:30.416912] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007440) 00:28:07.192 [2024-04-18 09:56:30.416993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:07.192 [2024-04-18 09:56:30.417016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:07.192 [2024-04-18 09:56:30.422511] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007440) 00:28:07.192 [2024-04-18 09:56:30.422563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:07.193 [2024-04-18 09:56:30.422588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:07.193 [2024-04-18 09:56:30.428157] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007440) 00:28:07.193 [2024-04-18 09:56:30.428207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:07.193 [2024-04-18 09:56:30.428247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:07.193 [2024-04-18 09:56:30.433676] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007440) 00:28:07.193 [2024-04-18 09:56:30.433763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:07.193 [2024-04-18 09:56:30.433785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:07.193 [2024-04-18 09:56:30.439245] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007440) 00:28:07.193 [2024-04-18 09:56:30.439306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:07.193 [2024-04-18 09:56:30.439327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:07.193 [2024-04-18 09:56:30.444742] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007440) 00:28:07.193 [2024-04-18 09:56:30.444792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:07.193 [2024-04-18 09:56:30.444832] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:07.193 [2024-04-18 09:56:30.450462] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007440) 00:28:07.193 [2024-04-18 09:56:30.450513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:07.193 [2024-04-18 09:56:30.450553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:07.193 [2024-04-18 09:56:30.455971] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007440) 00:28:07.193 [2024-04-18 09:56:30.456022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:07.193 [2024-04-18 09:56:30.456047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:07.193 [2024-04-18 09:56:30.461393] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007440) 00:28:07.193 [2024-04-18 09:56:30.461452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:07.193 [2024-04-18 09:56:30.461474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:07.193 [2024-04-18 09:56:30.466933] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007440) 00:28:07.193 [2024-04-18 09:56:30.467001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:07.193 [2024-04-18 09:56:30.467022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:07.193 [2024-04-18 09:56:30.472516] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007440) 00:28:07.193 [2024-04-18 09:56:30.472584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:07.193 [2024-04-18 09:56:30.472612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:07.193 [2024-04-18 09:56:30.478075] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007440) 00:28:07.193 [2024-04-18 09:56:30.478143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:07.193 [2024-04-18 09:56:30.478170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:07.193 [2024-04-18 09:56:30.483661] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007440) 00:28:07.193 [2024-04-18 09:56:30.483750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:28:07.193 [2024-04-18 09:56:30.483771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:07.193 [2024-04-18 09:56:30.489195] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007440) 00:28:07.193 [2024-04-18 09:56:30.489254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:07.193 [2024-04-18 09:56:30.489275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:07.193 [2024-04-18 09:56:30.494659] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007440) 00:28:07.193 [2024-04-18 09:56:30.494709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:07.193 [2024-04-18 09:56:30.494748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:07.193 [2024-04-18 09:56:30.500218] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007440) 00:28:07.193 [2024-04-18 09:56:30.500268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:07.193 [2024-04-18 09:56:30.500292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:07.193 [2024-04-18 09:56:30.505628] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007440) 00:28:07.193 [2024-04-18 09:56:30.505688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:07.193 [2024-04-18 09:56:30.505708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:07.193 [2024-04-18 09:56:30.511022] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007440) 00:28:07.193 [2024-04-18 09:56:30.511080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:07.193 [2024-04-18 09:56:30.511101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:07.193 [2024-04-18 09:56:30.516379] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007440) 00:28:07.193 [2024-04-18 09:56:30.516444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:07.193 [2024-04-18 09:56:30.516465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:07.193 [2024-04-18 09:56:30.521700] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007440) 00:28:07.193 [2024-04-18 09:56:30.521763] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:07.193 [2024-04-18 09:56:30.521788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:07.193 [2024-04-18 09:56:30.527184] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007440) 00:28:07.193 [2024-04-18 09:56:30.527235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:07.193 [2024-04-18 09:56:30.527260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:07.193 [2024-04-18 09:56:30.532641] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007440) 00:28:07.193 [2024-04-18 09:56:30.532702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:07.193 [2024-04-18 09:56:30.532722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:07.193 [2024-04-18 09:56:30.538066] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007440) 00:28:07.193 [2024-04-18 09:56:30.538138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:07.193 [2024-04-18 09:56:30.538170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:07.193 [2024-04-18 09:56:30.543627] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007440) 00:28:07.193 [2024-04-18 09:56:30.543677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:07.193 [2024-04-18 09:56:30.543701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:07.193 [2024-04-18 09:56:30.549206] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007440) 00:28:07.193 [2024-04-18 09:56:30.549258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:07.193 [2024-04-18 09:56:30.549281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:07.193 [2024-04-18 09:56:30.554720] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007440) 00:28:07.193 [2024-04-18 09:56:30.554779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:07.193 [2024-04-18 09:56:30.554799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:07.193 [2024-04-18 09:56:30.560420] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data 
digest error on tqpair=(0x614000007440) 00:28:07.193 [2024-04-18 09:56:30.560482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:07.193 [2024-04-18 09:56:30.560503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:07.193 [2024-04-18 09:56:30.566001] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007440) 00:28:07.193 [2024-04-18 09:56:30.566052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:07.194 [2024-04-18 09:56:30.566079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:07.194 [2024-04-18 09:56:30.571526] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007440) 00:28:07.194 [2024-04-18 09:56:30.571577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:07.194 [2024-04-18 09:56:30.571601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:07.194 [2024-04-18 09:56:30.577073] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007440) 00:28:07.194 [2024-04-18 09:56:30.577150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:07.194 [2024-04-18 09:56:30.577191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:07.194 [2024-04-18 09:56:30.582684] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007440) 00:28:07.194 [2024-04-18 09:56:30.582749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:07.194 [2024-04-18 09:56:30.582770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:07.454 [2024-04-18 09:56:30.588299] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007440) 00:28:07.454 [2024-04-18 09:56:30.588363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:07.454 [2024-04-18 09:56:30.588384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:07.454 [2024-04-18 09:56:30.593903] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007440) 00:28:07.454 [2024-04-18 09:56:30.593956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:07.454 [2024-04-18 09:56:30.593982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:07.454 [2024-04-18 
09:56:30.599523] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007440) 00:28:07.454 [2024-04-18 09:56:30.599573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:07.454 [2024-04-18 09:56:30.599597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:07.454 [2024-04-18 09:56:30.605057] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007440) 00:28:07.454 [2024-04-18 09:56:30.605119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:07.454 [2024-04-18 09:56:30.605154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:07.454 [2024-04-18 09:56:30.610609] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007440) 00:28:07.454 [2024-04-18 09:56:30.610669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:07.454 [2024-04-18 09:56:30.610690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:07.454 [2024-04-18 09:56:30.616254] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007440) 00:28:07.454 [2024-04-18 09:56:30.616313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:07.454 [2024-04-18 09:56:30.616350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:07.455 [2024-04-18 09:56:30.621829] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007440) 00:28:07.455 [2024-04-18 09:56:30.621879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:07.455 [2024-04-18 09:56:30.621919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:07.455 [2024-04-18 09:56:30.627304] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007440) 00:28:07.455 [2024-04-18 09:56:30.627356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:07.455 [2024-04-18 09:56:30.627380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:07.455 [2024-04-18 09:56:30.632690] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007440) 00:28:07.455 [2024-04-18 09:56:30.632764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:07.455 [2024-04-18 09:56:30.632786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:07.455 [2024-04-18 09:56:30.638300] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007440) 00:28:07.455 [2024-04-18 09:56:30.638360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:07.455 [2024-04-18 09:56:30.638382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:07.455 [2024-04-18 09:56:30.644096] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007440) 00:28:07.455 [2024-04-18 09:56:30.644175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:07.455 [2024-04-18 09:56:30.644207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:07.455 [2024-04-18 09:56:30.649942] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007440) 00:28:07.455 [2024-04-18 09:56:30.650026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:07.455 [2024-04-18 09:56:30.650052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:07.455 [2024-04-18 09:56:30.655771] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007440) 00:28:07.455 [2024-04-18 09:56:30.655857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:07.455 [2024-04-18 09:56:30.655887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:07.455 [2024-04-18 09:56:30.661617] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007440) 00:28:07.455 [2024-04-18 09:56:30.661699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:07.455 [2024-04-18 09:56:30.661722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:07.455 [2024-04-18 09:56:30.667381] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007440) 00:28:07.455 [2024-04-18 09:56:30.667444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:07.455 [2024-04-18 09:56:30.667466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:07.455 [2024-04-18 09:56:30.673038] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007440) 00:28:07.455 [2024-04-18 09:56:30.673091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:07.455 [2024-04-18 
09:56:30.673137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:07.455 [2024-04-18 09:56:30.678662] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007440) 00:28:07.455 [2024-04-18 09:56:30.678714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:07.455 [2024-04-18 09:56:30.678742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:07.455 [2024-04-18 09:56:30.684419] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007440) 00:28:07.455 [2024-04-18 09:56:30.684476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:07.455 [2024-04-18 09:56:30.684497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:07.455 [2024-04-18 09:56:30.690098] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007440) 00:28:07.455 [2024-04-18 09:56:30.690167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:07.455 [2024-04-18 09:56:30.690189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:07.455 [2024-04-18 09:56:30.695737] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007440) 00:28:07.455 [2024-04-18 09:56:30.695788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:07.455 [2024-04-18 09:56:30.695813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:07.455 [2024-04-18 09:56:30.701438] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007440) 00:28:07.455 [2024-04-18 09:56:30.701490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:07.455 [2024-04-18 09:56:30.701531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:07.455 [2024-04-18 09:56:30.707236] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007440) 00:28:07.455 [2024-04-18 09:56:30.707330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:07.455 [2024-04-18 09:56:30.707361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:07.455 [2024-04-18 09:56:30.712986] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007440) 00:28:07.455 [2024-04-18 09:56:30.713059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 
cid:15 nsid:1 lba:11776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:07.455 [2024-04-18 09:56:30.713082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:07.455 [2024-04-18 09:56:30.718849] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007440) 00:28:07.455 [2024-04-18 09:56:30.718952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:07.455 [2024-04-18 09:56:30.718974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:07.455 [2024-04-18 09:56:30.724669] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007440) 00:28:07.455 [2024-04-18 09:56:30.724725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:07.455 [2024-04-18 09:56:30.724749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:07.455 [2024-04-18 09:56:30.730312] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007440) 00:28:07.455 [2024-04-18 09:56:30.730362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:07.455 [2024-04-18 09:56:30.730386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:07.455 [2024-04-18 09:56:30.735883] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007440) 00:28:07.455 [2024-04-18 09:56:30.735943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:07.455 [2024-04-18 09:56:30.735964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:07.455 [2024-04-18 09:56:30.741709] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007440) 00:28:07.455 [2024-04-18 09:56:30.741784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:07.455 [2024-04-18 09:56:30.741805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:07.455 [2024-04-18 09:56:30.747404] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007440) 00:28:07.455 [2024-04-18 09:56:30.747466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:07.455 [2024-04-18 09:56:30.747507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:07.455 [2024-04-18 09:56:30.753039] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007440) 00:28:07.455 [2024-04-18 
09:56:30.753088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:07.455 [2024-04-18 09:56:30.753128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:07.455 [2024-04-18 09:56:30.758698] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007440) 00:28:07.455 [2024-04-18 09:56:30.758766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:07.456 [2024-04-18 09:56:30.758790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:07.456 [2024-04-18 09:56:30.764227] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007440) 00:28:07.456 [2024-04-18 09:56:30.764298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:07.456 [2024-04-18 09:56:30.764319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:07.456 [2024-04-18 09:56:30.769897] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007440) 00:28:07.456 [2024-04-18 09:56:30.769995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:07.456 [2024-04-18 09:56:30.770016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:07.456 [2024-04-18 09:56:30.775621] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007440) 00:28:07.456 [2024-04-18 09:56:30.775689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:07.456 [2024-04-18 09:56:30.775732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:07.456 [2024-04-18 09:56:30.781372] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007440) 00:28:07.456 [2024-04-18 09:56:30.781442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:07.456 [2024-04-18 09:56:30.781467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:07.456 [2024-04-18 09:56:30.787239] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007440) 00:28:07.456 [2024-04-18 09:56:30.787310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:07.456 [2024-04-18 09:56:30.787332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:07.456 [2024-04-18 09:56:30.792945] 
nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007440) 00:28:07.456 [2024-04-18 09:56:30.793037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:07.456 [2024-04-18 09:56:30.793058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:07.456 [2024-04-18 09:56:30.798755] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007440) 00:28:07.456 [2024-04-18 09:56:30.798835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:07.456 [2024-04-18 09:56:30.798856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:07.456 [2024-04-18 09:56:30.804558] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007440) 00:28:07.456 [2024-04-18 09:56:30.804614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:07.456 [2024-04-18 09:56:30.804656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:07.456 [2024-04-18 09:56:30.810231] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007440) 00:28:07.456 [2024-04-18 09:56:30.810283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:07.456 [2024-04-18 09:56:30.810308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:07.456 [2024-04-18 09:56:30.815679] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007440) 00:28:07.456 [2024-04-18 09:56:30.815740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:07.456 [2024-04-18 09:56:30.815762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:07.456 [2024-04-18 09:56:30.821272] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007440) 00:28:07.456 [2024-04-18 09:56:30.821364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:07.456 [2024-04-18 09:56:30.821384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:07.456 [2024-04-18 09:56:30.826939] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007440) 00:28:07.456 [2024-04-18 09:56:30.826989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:07.456 [2024-04-18 09:56:30.827014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:07.456 [2024-04-18 09:56:30.832489] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007440) 00:28:07.456 [2024-04-18 09:56:30.832539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:07.456 [2024-04-18 09:56:30.832563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:07.456 [2024-04-18 09:56:30.838009] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007440) 00:28:07.456 [2024-04-18 09:56:30.838067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:07.456 [2024-04-18 09:56:30.838089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:07.456 [2024-04-18 09:56:30.843498] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007440) 00:28:07.456 [2024-04-18 09:56:30.843557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:07.456 [2024-04-18 09:56:30.843578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:07.717 [2024-04-18 09:56:30.848924] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007440) 00:28:07.717 [2024-04-18 09:56:30.848981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:07.717 [2024-04-18 09:56:30.849002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:07.717 [2024-04-18 09:56:30.854376] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007440) 00:28:07.717 [2024-04-18 09:56:30.854426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:07.717 [2024-04-18 09:56:30.854451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:07.717 [2024-04-18 09:56:30.859749] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007440) 00:28:07.717 [2024-04-18 09:56:30.859799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:07.717 [2024-04-18 09:56:30.859823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:07.717 [2024-04-18 09:56:30.865226] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007440) 00:28:07.718 [2024-04-18 09:56:30.865281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:07.718 [2024-04-18 09:56:30.865302] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:07.718 [2024-04-18 09:56:30.870689] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007440) 00:28:07.718 [2024-04-18 09:56:30.870783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:07.718 [2024-04-18 09:56:30.870804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:07.718 [2024-04-18 09:56:30.876247] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007440) 00:28:07.718 [2024-04-18 09:56:30.876295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:07.718 [2024-04-18 09:56:30.876347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:07.718 [2024-04-18 09:56:30.881740] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007440) 00:28:07.718 [2024-04-18 09:56:30.881789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:07.718 [2024-04-18 09:56:30.881829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:07.718 [2024-04-18 09:56:30.887115] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007440) 00:28:07.718 [2024-04-18 09:56:30.887185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:07.718 [2024-04-18 09:56:30.887206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:07.718 [2024-04-18 09:56:30.892468] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007440) 00:28:07.718 [2024-04-18 09:56:30.892528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:07.718 [2024-04-18 09:56:30.892549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:07.718 [2024-04-18 09:56:30.897952] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007440) 00:28:07.718 [2024-04-18 09:56:30.898019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:07.718 [2024-04-18 09:56:30.898044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:07.718 [2024-04-18 09:56:30.903479] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007440) 00:28:07.718 [2024-04-18 09:56:30.903527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 
lba:6848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:07.718 [2024-04-18 09:56:30.903567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:07.718 [2024-04-18 09:56:30.908866] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007440) 00:28:07.718 [2024-04-18 09:56:30.908929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:07.718 [2024-04-18 09:56:30.908954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:07.718 [2024-04-18 09:56:30.914443] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007440) 00:28:07.718 [2024-04-18 09:56:30.914511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:07.718 [2024-04-18 09:56:30.914533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:07.718 [2024-04-18 09:56:30.920262] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007440) 00:28:07.718 [2024-04-18 09:56:30.920331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:07.718 [2024-04-18 09:56:30.920353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:07.718 [2024-04-18 09:56:30.926093] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007440) 00:28:07.718 [2024-04-18 09:56:30.926176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:07.718 [2024-04-18 09:56:30.926203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:07.718 [2024-04-18 09:56:30.932076] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007440) 00:28:07.718 [2024-04-18 09:56:30.932169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:07.718 [2024-04-18 09:56:30.932195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:07.718 [2024-04-18 09:56:30.937885] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007440) 00:28:07.718 [2024-04-18 09:56:30.937950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:07.718 [2024-04-18 09:56:30.937971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:07.718 [2024-04-18 09:56:30.943799] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007440) 00:28:07.718 [2024-04-18 
09:56:30.943875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:07.718 [2024-04-18 09:56:30.943897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:07.718 [2024-04-18 09:56:30.949742] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007440) 00:28:07.718 [2024-04-18 09:56:30.949817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:07.718 [2024-04-18 09:56:30.949838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:07.718 [2024-04-18 09:56:30.955555] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007440) 00:28:07.718 [2024-04-18 09:56:30.955604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:07.718 [2024-04-18 09:56:30.955643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:07.718 [2024-04-18 09:56:30.961346] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007440) 00:28:07.718 [2024-04-18 09:56:30.961411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:07.718 [2024-04-18 09:56:30.961435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:07.718 [2024-04-18 09:56:30.967085] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007440) 00:28:07.718 [2024-04-18 09:56:30.967156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:07.718 [2024-04-18 09:56:30.967178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:07.718 [2024-04-18 09:56:30.972888] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007440) 00:28:07.718 [2024-04-18 09:56:30.972981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:07.718 [2024-04-18 09:56:30.973003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:07.718 [2024-04-18 09:56:30.978808] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007440) 00:28:07.718 [2024-04-18 09:56:30.978868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:07.718 [2024-04-18 09:56:30.978901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:07.719 [2024-04-18 09:56:30.984536] 
nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007440) 00:28:07.719 [2024-04-18 09:56:30.984585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:07.719 [2024-04-18 09:56:30.984609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:07.719 [2024-04-18 09:56:30.990204] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007440) 00:28:07.719 [2024-04-18 09:56:30.990261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:07.719 [2024-04-18 09:56:30.990286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:07.719 [2024-04-18 09:56:30.995845] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007440) 00:28:07.719 [2024-04-18 09:56:30.995907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:07.719 [2024-04-18 09:56:30.995929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:07.719 [2024-04-18 09:56:31.001426] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007440) 00:28:07.719 [2024-04-18 09:56:31.001499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:07.719 [2024-04-18 09:56:31.001526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:07.719 [2024-04-18 09:56:31.007020] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007440) 00:28:07.719 [2024-04-18 09:56:31.007071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:07.719 [2024-04-18 09:56:31.007095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:07.719 [2024-04-18 09:56:31.012691] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007440) 00:28:07.719 [2024-04-18 09:56:31.012741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:07.719 [2024-04-18 09:56:31.012781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:07.719 [2024-04-18 09:56:31.018282] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007440) 00:28:07.719 [2024-04-18 09:56:31.018343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:07.719 [2024-04-18 09:56:31.018364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:07.719 [2024-04-18 09:56:31.023840] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007440) 00:28:07.719 [2024-04-18 09:56:31.023921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:07.719 [2024-04-18 09:56:31.023943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:07.719 [2024-04-18 09:56:31.029505] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007440) 00:28:07.719 [2024-04-18 09:56:31.029565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:07.719 [2024-04-18 09:56:31.029585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:07.719 [2024-04-18 09:56:31.035186] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007440) 00:28:07.719 [2024-04-18 09:56:31.035237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:07.719 [2024-04-18 09:56:31.035261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:07.719 [2024-04-18 09:56:31.040872] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007440) 00:28:07.719 [2024-04-18 09:56:31.040935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:07.719 [2024-04-18 09:56:31.040960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:07.719 [2024-04-18 09:56:31.046435] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007440) 00:28:07.719 [2024-04-18 09:56:31.046493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:07.719 [2024-04-18 09:56:31.046513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:07.719 [2024-04-18 09:56:31.051929] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007440) 00:28:07.719 [2024-04-18 09:56:31.051989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:07.719 [2024-04-18 09:56:31.052010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:07.719 [2024-04-18 09:56:31.057441] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007440) 00:28:07.719 [2024-04-18 09:56:31.057490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:07.719 [2024-04-18 09:56:31.057523] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:07.719 [2024-04-18 09:56:31.062990] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007440) 00:28:07.719 [2024-04-18 09:56:31.063041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:07.719 [2024-04-18 09:56:31.063066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:07.719 [2024-04-18 09:56:31.068488] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007440) 00:28:07.719 [2024-04-18 09:56:31.068548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:07.719 [2024-04-18 09:56:31.068569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:07.719 [2024-04-18 09:56:31.073947] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007440) 00:28:07.719 [2024-04-18 09:56:31.074009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:07.719 [2024-04-18 09:56:31.074031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:07.719 [2024-04-18 09:56:31.079392] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007440) 00:28:07.719 [2024-04-18 09:56:31.079445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:07.719 [2024-04-18 09:56:31.079469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:07.719 [2024-04-18 09:56:31.084816] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007440) 00:28:07.719 [2024-04-18 09:56:31.084867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:07.719 [2024-04-18 09:56:31.084891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:07.719 [2024-04-18 09:56:31.090434] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007440) 00:28:07.719 [2024-04-18 09:56:31.090490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:07.719 [2024-04-18 09:56:31.090514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:07.719 [2024-04-18 09:56:31.095906] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007440) 00:28:07.719 [2024-04-18 09:56:31.095967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 
lba:20928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:07.719 [2024-04-18 09:56:31.095988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:07.719 [2024-04-18 09:56:31.101433] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007440) 00:28:07.719 [2024-04-18 09:56:31.101485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:07.719 [2024-04-18 09:56:31.101505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:07.719 [2024-04-18 09:56:31.106984] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007440) 00:28:07.719 [2024-04-18 09:56:31.107037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:07.719 [2024-04-18 09:56:31.107058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:07.981 [2024-04-18 09:56:31.112570] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007440) 00:28:07.981 [2024-04-18 09:56:31.112630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:07.981 [2024-04-18 09:56:31.112650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:07.981 [2024-04-18 09:56:31.118337] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007440) 00:28:07.981 [2024-04-18 09:56:31.118403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:07.981 [2024-04-18 09:56:31.118423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:07.981 [2024-04-18 09:56:31.123984] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007440) 00:28:07.981 [2024-04-18 09:56:31.124037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:07.981 [2024-04-18 09:56:31.124058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:07.981 [2024-04-18 09:56:31.129594] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007440) 00:28:07.981 [2024-04-18 09:56:31.129655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:07.981 [2024-04-18 09:56:31.129676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:07.981 [2024-04-18 09:56:31.135417] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007440) 00:28:07.981 [2024-04-18 
09:56:31.135475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:07.981 [2024-04-18 09:56:31.135496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:07.981 [2024-04-18 09:56:31.141213] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007440) 00:28:07.981 [2024-04-18 09:56:31.141271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:07.981 [2024-04-18 09:56:31.141292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:07.981 [2024-04-18 09:56:31.147044] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007440) 00:28:07.981 [2024-04-18 09:56:31.147108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:07.981 [2024-04-18 09:56:31.147146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:07.981 [2024-04-18 09:56:31.152834] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007440) 00:28:07.981 [2024-04-18 09:56:31.152884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:07.981 [2024-04-18 09:56:31.152919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:07.981 [2024-04-18 09:56:31.158452] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007440) 00:28:07.981 [2024-04-18 09:56:31.158502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:07.981 [2024-04-18 09:56:31.158538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:07.981 [2024-04-18 09:56:31.163973] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007440) 00:28:07.981 [2024-04-18 09:56:31.164023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:07.981 [2024-04-18 09:56:31.164059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:07.981 [2024-04-18 09:56:31.169536] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007440) 00:28:07.981 [2024-04-18 09:56:31.169584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:07.981 [2024-04-18 09:56:31.169620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:07.981 [2024-04-18 09:56:31.175061] 
nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007440) 00:28:07.981 [2024-04-18 09:56:31.175112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:0 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:07.981 [2024-04-18 09:56:31.175148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:07.981 [2024-04-18 09:56:31.180557] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007440) 00:28:07.981 [2024-04-18 09:56:31.180608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:07.981 [2024-04-18 09:56:31.180628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:07.981 [2024-04-18 09:56:31.186010] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007440) 00:28:07.981 [2024-04-18 09:56:31.186072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:07.981 [2024-04-18 09:56:31.186092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:07.981 [2024-04-18 09:56:31.191571] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007440) 00:28:07.981 [2024-04-18 09:56:31.191621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:07.981 [2024-04-18 09:56:31.191657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:07.981 [2024-04-18 09:56:31.197039] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007440) 00:28:07.981 [2024-04-18 09:56:31.197091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:07.981 [2024-04-18 09:56:31.197111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:07.981 [2024-04-18 09:56:31.202489] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007440) 00:28:07.981 [2024-04-18 09:56:31.202545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:07.981 [2024-04-18 09:56:31.202565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:07.981 [2024-04-18 09:56:31.208116] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007440) 00:28:07.981 [2024-04-18 09:56:31.208179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:07.981 [2024-04-18 09:56:31.208200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:07.981 [2024-04-18 09:56:31.213728] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007440) 00:28:07.981 [2024-04-18 09:56:31.213780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:07.981 [2024-04-18 09:56:31.213817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:07.981 [2024-04-18 09:56:31.219342] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007440) 00:28:07.981 [2024-04-18 09:56:31.219410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:07.981 [2024-04-18 09:56:31.219453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:07.981 [2024-04-18 09:56:31.224938] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007440) 00:28:07.982 [2024-04-18 09:56:31.224991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:07.982 [2024-04-18 09:56:31.225028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:07.982 [2024-04-18 09:56:31.230532] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007440) 00:28:07.982 [2024-04-18 09:56:31.230588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:07.982 [2024-04-18 09:56:31.230609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:07.982 [2024-04-18 09:56:31.236250] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007440) 00:28:07.982 [2024-04-18 09:56:31.236301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:07.982 [2024-04-18 09:56:31.236338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:07.982 [2024-04-18 09:56:31.241725] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007440) 00:28:07.982 [2024-04-18 09:56:31.241777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:07.982 [2024-04-18 09:56:31.241798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:07.982 [2024-04-18 09:56:31.247245] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007440) 00:28:07.982 [2024-04-18 09:56:31.247295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:07.982 [2024-04-18 09:56:31.247315] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:07.982 [2024-04-18 09:56:31.252763] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007440) 00:28:07.982 [2024-04-18 09:56:31.252812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:07.982 [2024-04-18 09:56:31.252849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:07.982 [2024-04-18 09:56:31.258283] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007440) 00:28:07.982 [2024-04-18 09:56:31.258352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:07.982 [2024-04-18 09:56:31.258374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:07.982 [2024-04-18 09:56:31.263840] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007440) 00:28:07.982 [2024-04-18 09:56:31.263891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:07.982 [2024-04-18 09:56:31.263912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:07.982 [2024-04-18 09:56:31.269254] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007440) 00:28:07.982 [2024-04-18 09:56:31.269303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:07.982 [2024-04-18 09:56:31.269338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:07.982 [2024-04-18 09:56:31.274740] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007440) 00:28:07.982 [2024-04-18 09:56:31.274790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:07.982 [2024-04-18 09:56:31.274828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:07.982 [2024-04-18 09:56:31.280361] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007440) 00:28:07.982 [2024-04-18 09:56:31.280412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:07.982 [2024-04-18 09:56:31.280432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:07.982 [2024-04-18 09:56:31.285864] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007440) 00:28:07.982 [2024-04-18 09:56:31.285914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 
lba:1792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:07.982 [2024-04-18 09:56:31.285949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:07.982 [2024-04-18 09:56:31.291310] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007440) 00:28:07.982 [2024-04-18 09:56:31.291363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:07.982 [2024-04-18 09:56:31.291382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:07.982 [2024-04-18 09:56:31.296753] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007440) 00:28:07.982 [2024-04-18 09:56:31.296803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:07.982 [2024-04-18 09:56:31.296861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:07.982 [2024-04-18 09:56:31.302250] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007440) 00:28:07.982 [2024-04-18 09:56:31.302315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:07.982 [2024-04-18 09:56:31.302350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:07.982 [2024-04-18 09:56:31.307582] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007440) 00:28:07.982 [2024-04-18 09:56:31.307632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:07.982 [2024-04-18 09:56:31.307652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:07.982 [2024-04-18 09:56:31.312877] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007440) 00:28:07.982 [2024-04-18 09:56:31.312928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:07.982 [2024-04-18 09:56:31.312949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:07.982 [2024-04-18 09:56:31.318349] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007440) 00:28:07.982 [2024-04-18 09:56:31.318397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:07.982 [2024-04-18 09:56:31.318431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:07.982 [2024-04-18 09:56:31.323786] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007440) 00:28:07.982 [2024-04-18 
09:56:31.323835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:07.982 [2024-04-18 09:56:31.323870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:07.982 [2024-04-18 09:56:31.329394] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007440) 00:28:07.982 [2024-04-18 09:56:31.329442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:07.982 [2024-04-18 09:56:31.329477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:07.982 [2024-04-18 09:56:31.334960] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007440) 00:28:07.982 [2024-04-18 09:56:31.335011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:07.982 [2024-04-18 09:56:31.335031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:07.982 [2024-04-18 09:56:31.340397] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007440) 00:28:07.982 [2024-04-18 09:56:31.340448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:07.982 [2024-04-18 09:56:31.340469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:07.982 [2024-04-18 09:56:31.345757] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007440) 00:28:07.982 [2024-04-18 09:56:31.345808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:07.982 [2024-04-18 09:56:31.345828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:07.982 [2024-04-18 09:56:31.351224] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007440) 00:28:07.982 [2024-04-18 09:56:31.351275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:07.982 [2024-04-18 09:56:31.351296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:07.982 [2024-04-18 09:56:31.356720] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007440) 00:28:07.982 [2024-04-18 09:56:31.356771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:07.983 [2024-04-18 09:56:31.356807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:07.983 [2024-04-18 09:56:31.362312] 
nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007440) 00:28:07.983 [2024-04-18 09:56:31.362383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:07.983 [2024-04-18 09:56:31.362405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:07.983 [2024-04-18 09:56:31.368098] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007440) 00:28:07.983 [2024-04-18 09:56:31.368170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:07.983 [2024-04-18 09:56:31.368191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:08.243 [2024-04-18 09:56:31.373766] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007440) 00:28:08.243 [2024-04-18 09:56:31.373841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:08.243 [2024-04-18 09:56:31.373862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:08.243 [2024-04-18 09:56:31.379481] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007440) 00:28:08.243 [2024-04-18 09:56:31.379539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:08.243 [2024-04-18 09:56:31.379561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:08.243 [2024-04-18 09:56:31.385223] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007440) 00:28:08.243 [2024-04-18 09:56:31.385273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:08.243 [2024-04-18 09:56:31.385293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:08.243 [2024-04-18 09:56:31.390753] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007440) 00:28:08.243 [2024-04-18 09:56:31.390801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:08.243 [2024-04-18 09:56:31.390837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:08.243 [2024-04-18 09:56:31.396305] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007440) 00:28:08.243 [2024-04-18 09:56:31.396354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:08.243 [2024-04-18 09:56:31.396375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:08.243 [2024-04-18 09:56:31.401905] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007440) 00:28:08.243 [2024-04-18 09:56:31.401958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:08.243 [2024-04-18 09:56:31.401978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:08.243 [2024-04-18 09:56:31.407423] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007440) 00:28:08.243 [2024-04-18 09:56:31.407473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:08.243 [2024-04-18 09:56:31.407493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:08.243 [2024-04-18 09:56:31.413085] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007440) 00:28:08.243 [2024-04-18 09:56:31.413172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:08.243 [2024-04-18 09:56:31.413193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:08.243 [2024-04-18 09:56:31.418710] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007440) 00:28:08.243 [2024-04-18 09:56:31.418763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:08.243 [2024-04-18 09:56:31.418783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:08.244 [2024-04-18 09:56:31.424334] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007440) 00:28:08.244 [2024-04-18 09:56:31.424396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:08.244 [2024-04-18 09:56:31.424417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:08.244 [2024-04-18 09:56:31.430071] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007440) 00:28:08.244 [2024-04-18 09:56:31.430176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:08.244 [2024-04-18 09:56:31.430200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:08.244 [2024-04-18 09:56:31.435888] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007440) 00:28:08.244 [2024-04-18 09:56:31.435983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:08.244 [2024-04-18 09:56:31.436005] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:08.244 [2024-04-18 09:56:31.441589] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007440) 00:28:08.244 [2024-04-18 09:56:31.441648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:08.244 [2024-04-18 09:56:31.441687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:08.244 [2024-04-18 09:56:31.447394] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007440) 00:28:08.244 [2024-04-18 09:56:31.447454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:08.244 [2024-04-18 09:56:31.447475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:08.244 [2024-04-18 09:56:31.453071] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007440) 00:28:08.244 [2024-04-18 09:56:31.453141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:08.244 [2024-04-18 09:56:31.453164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:08.244 [2024-04-18 09:56:31.458748] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007440) 00:28:08.244 [2024-04-18 09:56:31.458802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:08.244 [2024-04-18 09:56:31.458837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:08.244 [2024-04-18 09:56:31.464511] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007440) 00:28:08.244 [2024-04-18 09:56:31.464560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:08.244 [2024-04-18 09:56:31.464597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:08.244 [2024-04-18 09:56:31.470109] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007440) 00:28:08.244 [2024-04-18 09:56:31.470167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:08.244 [2024-04-18 09:56:31.470203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:08.244 [2024-04-18 09:56:31.475848] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007440) 00:28:08.244 [2024-04-18 09:56:31.475898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 
lba:1280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:08.244 [2024-04-18 09:56:31.475918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:08.244 [2024-04-18 09:56:31.481251] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007440) 00:28:08.244 [2024-04-18 09:56:31.481302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:08.244 [2024-04-18 09:56:31.481321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:08.244 [2024-04-18 09:56:31.486729] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007440) 00:28:08.244 [2024-04-18 09:56:31.486778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:08.244 [2024-04-18 09:56:31.486814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:08.244 [2024-04-18 09:56:31.492240] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007440) 00:28:08.244 [2024-04-18 09:56:31.492289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:08.244 [2024-04-18 09:56:31.492309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:08.244 [2024-04-18 09:56:31.497631] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007440) 00:28:08.244 [2024-04-18 09:56:31.497706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:08.244 [2024-04-18 09:56:31.497742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:08.244 [2024-04-18 09:56:31.503307] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007440) 00:28:08.244 [2024-04-18 09:56:31.503372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:08.244 [2024-04-18 09:56:31.503393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:08.244 [2024-04-18 09:56:31.508992] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007440) 00:28:08.244 [2024-04-18 09:56:31.509058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:08.244 [2024-04-18 09:56:31.509080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:08.244 [2024-04-18 09:56:31.514899] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007440) 00:28:08.244 [2024-04-18 
09:56:31.514960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:08.244 [2024-04-18 09:56:31.514981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:08.244 [2024-04-18 09:56:31.520596] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007440) 00:28:08.244 [2024-04-18 09:56:31.520647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:08.244 [2024-04-18 09:56:31.520667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:08.244 [2024-04-18 09:56:31.526289] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007440) 00:28:08.244 [2024-04-18 09:56:31.526355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:08.244 [2024-04-18 09:56:31.526376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:08.244 [2024-04-18 09:56:31.531950] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007440) 00:28:08.244 [2024-04-18 09:56:31.531998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:08.244 [2024-04-18 09:56:31.532033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:08.244 [2024-04-18 09:56:31.537581] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007440) 00:28:08.244 [2024-04-18 09:56:31.537654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:08.244 [2024-04-18 09:56:31.537675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:08.244 [2024-04-18 09:56:31.543177] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007440) 00:28:08.244 [2024-04-18 09:56:31.543236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:08.244 [2024-04-18 09:56:31.543257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:08.244 [2024-04-18 09:56:31.548650] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007440) 00:28:08.244 [2024-04-18 09:56:31.548728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:08.244 [2024-04-18 09:56:31.548749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:08.244 [2024-04-18 09:56:31.554118] 
nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007440) 00:28:08.244 [2024-04-18 09:56:31.554211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:08.244 [2024-04-18 09:56:31.554233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:08.244 [2024-04-18 09:56:31.559674] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007440) 00:28:08.245 [2024-04-18 09:56:31.559731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:08.245 [2024-04-18 09:56:31.559751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:08.245 [2024-04-18 09:56:31.565098] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007440) 00:28:08.245 [2024-04-18 09:56:31.565166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:08.245 [2024-04-18 09:56:31.565188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:08.245 [2024-04-18 09:56:31.570773] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007440) 00:28:08.245 [2024-04-18 09:56:31.570852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:08.245 [2024-04-18 09:56:31.570886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:08.245 [2024-04-18 09:56:31.576455] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007440) 00:28:08.245 [2024-04-18 09:56:31.576511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:08.245 [2024-04-18 09:56:31.576548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:08.245 [2024-04-18 09:56:31.581930] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007440) 00:28:08.245 [2024-04-18 09:56:31.581986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:08.245 [2024-04-18 09:56:31.582023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:08.245 [2024-04-18 09:56:31.587534] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007440) 00:28:08.245 [2024-04-18 09:56:31.587590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:08.245 [2024-04-18 09:56:31.587627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:08.245 [2024-04-18 09:56:31.593084] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007440) 00:28:08.245 [2024-04-18 09:56:31.593148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:08.245 [2024-04-18 09:56:31.593170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:08.245 [2024-04-18 09:56:31.598450] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007440) 00:28:08.245 [2024-04-18 09:56:31.598500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:08.245 [2024-04-18 09:56:31.598535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:08.245 [2024-04-18 09:56:31.603897] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007440) 00:28:08.245 [2024-04-18 09:56:31.603949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:08.245 [2024-04-18 09:56:31.603969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:08.245 [2024-04-18 09:56:31.609271] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007440) 00:28:08.245 [2024-04-18 09:56:31.609323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:08.245 [2024-04-18 09:56:31.609342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:08.245 [2024-04-18 09:56:31.614566] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007440) 00:28:08.245 [2024-04-18 09:56:31.614615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:08.245 [2024-04-18 09:56:31.614652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:08.245 [2024-04-18 09:56:31.619926] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007440) 00:28:08.245 [2024-04-18 09:56:31.619974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:08.245 [2024-04-18 09:56:31.620011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:08.245 [2024-04-18 09:56:31.625310] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007440) 00:28:08.245 [2024-04-18 09:56:31.625375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:08.245 [2024-04-18 09:56:31.625411] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:08.245 [2024-04-18 09:56:31.630612] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007440) 00:28:08.245 [2024-04-18 09:56:31.630662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:08.245 [2024-04-18 09:56:31.630683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:08.506 [2024-04-18 09:56:31.635916] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007440) 00:28:08.506 [2024-04-18 09:56:31.635966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:08.506 [2024-04-18 09:56:31.636002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:08.506 [2024-04-18 09:56:31.641302] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007440) 00:28:08.506 [2024-04-18 09:56:31.641353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:08.506 [2024-04-18 09:56:31.641373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:08.506 [2024-04-18 09:56:31.646839] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007440) 00:28:08.506 [2024-04-18 09:56:31.646923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:08.506 [2024-04-18 09:56:31.646944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:08.506 [2024-04-18 09:56:31.652441] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007440) 00:28:08.506 [2024-04-18 09:56:31.652499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:08.506 [2024-04-18 09:56:31.652536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:08.506 [2024-04-18 09:56:31.657918] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007440) 00:28:08.506 [2024-04-18 09:56:31.657969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:08.506 [2024-04-18 09:56:31.658006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:08.506 [2024-04-18 09:56:31.663357] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007440) 00:28:08.506 [2024-04-18 09:56:31.663405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:640 
len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:08.506 [2024-04-18 09:56:31.663442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:08.506 [2024-04-18 09:56:31.668656] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007440) 00:28:08.506 [2024-04-18 09:56:31.668704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:08.506 [2024-04-18 09:56:31.668740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:08.506 [2024-04-18 09:56:31.674109] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007440) 00:28:08.506 [2024-04-18 09:56:31.674173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:08.506 [2024-04-18 09:56:31.674194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:08.506 [2024-04-18 09:56:31.679611] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007440) 00:28:08.506 [2024-04-18 09:56:31.679661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:08.506 [2024-04-18 09:56:31.679681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:08.506 [2024-04-18 09:56:31.684930] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007440) 00:28:08.506 [2024-04-18 09:56:31.684981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:08.506 [2024-04-18 09:56:31.685002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:08.506 [2024-04-18 09:56:31.690303] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007440) 00:28:08.506 [2024-04-18 09:56:31.690353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:08.506 [2024-04-18 09:56:31.690373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:08.506 [2024-04-18 09:56:31.695723] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007440) 00:28:08.506 [2024-04-18 09:56:31.695773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:08.506 [2024-04-18 09:56:31.695793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:08.506 [2024-04-18 09:56:31.701182] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007440) 00:28:08.506 [2024-04-18 09:56:31.701235] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:08.506 [2024-04-18 09:56:31.701255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:08.506 [2024-04-18 09:56:31.706688] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007440) 00:28:08.506 [2024-04-18 09:56:31.706752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:08.506 [2024-04-18 09:56:31.706774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:08.506 [2024-04-18 09:56:31.712324] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007440) 00:28:08.506 [2024-04-18 09:56:31.712393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:08.506 [2024-04-18 09:56:31.712414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:08.506 [2024-04-18 09:56:31.717956] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007440) 00:28:08.506 [2024-04-18 09:56:31.718028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:08.506 [2024-04-18 09:56:31.718050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:08.506 [2024-04-18 09:56:31.723604] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007440) 00:28:08.506 [2024-04-18 09:56:31.723673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:08.506 [2024-04-18 09:56:31.723696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:08.506 [2024-04-18 09:56:31.729144] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007440) 00:28:08.506 [2024-04-18 09:56:31.729207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:08.506 [2024-04-18 09:56:31.729229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:08.506 [2024-04-18 09:56:31.734635] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007440) 00:28:08.506 [2024-04-18 09:56:31.734688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:08.506 [2024-04-18 09:56:31.734708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:08.506 [2024-04-18 09:56:31.740156] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: 
*ERROR*: data digest error on tqpair=(0x614000007440) 00:28:08.506 [2024-04-18 09:56:31.740217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:08.506 [2024-04-18 09:56:31.740238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:08.506 [2024-04-18 09:56:31.745664] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007440) 00:28:08.506 [2024-04-18 09:56:31.745728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:08.506 [2024-04-18 09:56:31.745780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:08.506 [2024-04-18 09:56:31.751220] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007440) 00:28:08.506 [2024-04-18 09:56:31.751270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:08.506 [2024-04-18 09:56:31.751290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:08.506 [2024-04-18 09:56:31.756628] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007440) 00:28:08.506 [2024-04-18 09:56:31.756676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:08.506 [2024-04-18 09:56:31.756712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:08.506 [2024-04-18 09:56:31.762268] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007440) 00:28:08.506 [2024-04-18 09:56:31.762317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:08.506 [2024-04-18 09:56:31.762337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:08.506 [2024-04-18 09:56:31.767851] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007440) 00:28:08.506 [2024-04-18 09:56:31.767903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:08.506 [2024-04-18 09:56:31.767924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:08.506 [2024-04-18 09:56:31.773330] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007440) 00:28:08.506 [2024-04-18 09:56:31.773379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:08.506 [2024-04-18 09:56:31.773426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 
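The repeated failures above all follow one pattern: nvme_tcp.c flags a data digest error on the receive path, then spdk_nvme_print_completion reports the failed READ with status (00/22), i.e. status code type 0x0 and status code 0x22 (Transient Transport Error), with dnr:0 so the command stays retriable. A minimal sketch for tallying such entries from a saved console log follows; the file name and helper are assumptions for illustration only, not part of this run:

#!/usr/bin/env python3
# Tally SPDK data-digest errors and completion statuses from a saved console log.
# Illustrative helper under assumed file name "console.log"; not produced by this job.
import re
import sys
from collections import Counter

# Matches the "(sct/sc)" pair printed by spdk_nvme_print_completion, e.g. "(00/22) qid:1".
COMPLETION_RE = re.compile(r"\((?P<sct>[0-9a-f]{2})/(?P<sc>[0-9a-f]{2})\) qid:")
DIGEST_RE = re.compile(r"data digest error on tqpair")

def summarize(path: str) -> None:
    status = Counter()
    digest_errors = 0
    with open(path, errors="replace") as log:
        for line in log:
            if DIGEST_RE.search(line):
                digest_errors += 1
            for m in COMPLETION_RE.finditer(line):
                status[(m["sct"], m["sc"])] += 1
    print(f"data digest errors: {digest_errors}")
    for (sct, sc), count in status.most_common():
        print(f"completion status sct={sct} sc={sc}: {count}")

if __name__ == "__main__":
    summarize(sys.argv[1] if len(sys.argv) > 1 else "console.log")
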
00:28:08.506 [2024-04-18 09:56:31.778810] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007440) 00:28:08.506 [2024-04-18 09:56:31.778887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:08.506 [2024-04-18 09:56:31.778910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:08.506 [2024-04-18 09:56:31.784223] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007440) 00:28:08.507 [2024-04-18 09:56:31.784274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:08.507 [2024-04-18 09:56:31.784294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:08.507 [2024-04-18 09:56:31.789635] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007440) 00:28:08.507 [2024-04-18 09:56:31.789685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:08.507 [2024-04-18 09:56:31.789705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:08.507 [2024-04-18 09:56:31.795034] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007440) 00:28:08.507 [2024-04-18 09:56:31.795086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:08.507 [2024-04-18 09:56:31.795106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:08.507 [2024-04-18 09:56:31.800491] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007440) 00:28:08.507 [2024-04-18 09:56:31.800554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:08.507 [2024-04-18 09:56:31.800574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:08.507 [2024-04-18 09:56:31.805883] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007440) 00:28:08.507 [2024-04-18 09:56:31.805935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:08.507 [2024-04-18 09:56:31.805955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:08.507 [2024-04-18 09:56:31.811393] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007440) 00:28:08.507 [2024-04-18 09:56:31.811442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:08.507 [2024-04-18 09:56:31.811462] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:08.507 [2024-04-18 09:56:31.816776] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007440) 00:28:08.507 [2024-04-18 09:56:31.816827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:08.507 [2024-04-18 09:56:31.816863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:08.507 [2024-04-18 09:56:31.822355] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007440) 00:28:08.507 [2024-04-18 09:56:31.822416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:08.507 [2024-04-18 09:56:31.822437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:08.507 [2024-04-18 09:56:31.828191] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007440) 00:28:08.507 [2024-04-18 09:56:31.828262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:08.507 [2024-04-18 09:56:31.828285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:08.507 [2024-04-18 09:56:31.833891] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007440) 00:28:08.507 [2024-04-18 09:56:31.833987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:08.507 [2024-04-18 09:56:31.834009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:08.507 [2024-04-18 09:56:31.839671] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007440) 00:28:08.507 [2024-04-18 09:56:31.839746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:08.507 [2024-04-18 09:56:31.839782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:08.507 [2024-04-18 09:56:31.845502] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007440) 00:28:08.507 [2024-04-18 09:56:31.845565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:08.507 [2024-04-18 09:56:31.845588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:08.507 [2024-04-18 09:56:31.851226] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007440) 00:28:08.507 [2024-04-18 09:56:31.851286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:28:08.507 [2024-04-18 09:56:31.851308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:08.507 [2024-04-18 09:56:31.856925] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007440) 00:28:08.507 [2024-04-18 09:56:31.856973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:08.507 [2024-04-18 09:56:31.857009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:08.507 [2024-04-18 09:56:31.862527] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007440) 00:28:08.507 [2024-04-18 09:56:31.862579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:08.507 [2024-04-18 09:56:31.862599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:08.507 [2024-04-18 09:56:31.868110] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007440) 00:28:08.507 [2024-04-18 09:56:31.868171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:08.507 [2024-04-18 09:56:31.868192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:08.507 [2024-04-18 09:56:31.873510] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007440) 00:28:08.507 [2024-04-18 09:56:31.873561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:08.507 [2024-04-18 09:56:31.873582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:08.507 [2024-04-18 09:56:31.879094] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007440) 00:28:08.507 [2024-04-18 09:56:31.879158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:08.507 [2024-04-18 09:56:31.879178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:08.507 [2024-04-18 09:56:31.884656] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007440) 00:28:08.507 [2024-04-18 09:56:31.884719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:08.507 [2024-04-18 09:56:31.884753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:08.507 [2024-04-18 09:56:31.890265] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007440) 00:28:08.507 [2024-04-18 09:56:31.890314] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:08.507 [2024-04-18 09:56:31.890334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:08.507 [2024-04-18 09:56:31.895506] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007440) 00:28:08.507 [2024-04-18 09:56:31.895557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:08.507 [2024-04-18 09:56:31.895577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:08.768 [2024-04-18 09:56:31.900895] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007440) 00:28:08.768 [2024-04-18 09:56:31.900947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:08.768 [2024-04-18 09:56:31.900968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:08.768 [2024-04-18 09:56:31.906345] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007440) 00:28:08.768 [2024-04-18 09:56:31.906395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:08.768 [2024-04-18 09:56:31.906415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:08.768 [2024-04-18 09:56:31.911902] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007440) 00:28:08.768 [2024-04-18 09:56:31.911951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:08.768 [2024-04-18 09:56:31.911987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:08.768 [2024-04-18 09:56:31.917502] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007440) 00:28:08.768 [2024-04-18 09:56:31.917551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:08.768 [2024-04-18 09:56:31.917587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:08.768 [2024-04-18 09:56:31.923105] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007440) 00:28:08.768 [2024-04-18 09:56:31.923171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:08.768 [2024-04-18 09:56:31.923192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:08.768 [2024-04-18 09:56:31.928678] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007440) 
00:28:08.768 [2024-04-18 09:56:31.928728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:08.768 [2024-04-18 09:56:31.928750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:08.768 [2024-04-18 09:56:31.934102] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007440) 00:28:08.768 [2024-04-18 09:56:31.934163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:08.768 [2024-04-18 09:56:31.934183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:08.768 [2024-04-18 09:56:31.939516] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007440) 00:28:08.768 [2024-04-18 09:56:31.939568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:08.768 [2024-04-18 09:56:31.939588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:08.768 [2024-04-18 09:56:31.945092] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007440) 00:28:08.768 [2024-04-18 09:56:31.945155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:08.768 [2024-04-18 09:56:31.945177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:08.768 [2024-04-18 09:56:31.950608] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007440) 00:28:08.768 [2024-04-18 09:56:31.950659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:08.768 [2024-04-18 09:56:31.950679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:08.768 [2024-04-18 09:56:31.956272] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007440) 00:28:08.768 [2024-04-18 09:56:31.956323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:08.768 [2024-04-18 09:56:31.956342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:08.768 [2024-04-18 09:56:31.962038] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007440) 00:28:08.768 [2024-04-18 09:56:31.962096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:08.768 [2024-04-18 09:56:31.962117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:08.769 [2024-04-18 09:56:31.967907] 
nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007440) 00:28:08.769 [2024-04-18 09:56:31.967962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:08.769 [2024-04-18 09:56:31.967993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:08.769 [2024-04-18 09:56:31.973467] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007440) 00:28:08.769 [2024-04-18 09:56:31.973516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:08.769 [2024-04-18 09:56:31.973536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:08.769 [2024-04-18 09:56:31.979072] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007440) 00:28:08.769 [2024-04-18 09:56:31.979143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:08.769 [2024-04-18 09:56:31.979165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:08.769 [2024-04-18 09:56:31.984630] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007440) 00:28:08.769 [2024-04-18 09:56:31.984695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:08.769 [2024-04-18 09:56:31.984731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:08.769 [2024-04-18 09:56:31.990172] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007440) 00:28:08.769 [2024-04-18 09:56:31.990222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:08.769 [2024-04-18 09:56:31.990243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:08.769 [2024-04-18 09:56:31.995743] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007440) 00:28:08.769 [2024-04-18 09:56:31.995794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:08.769 [2024-04-18 09:56:31.995830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:08.769 [2024-04-18 09:56:32.001344] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007440) 00:28:08.769 [2024-04-18 09:56:32.001394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:08.769 [2024-04-18 09:56:32.001421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:08.769 [2024-04-18 09:56:32.006784] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007440) 00:28:08.769 [2024-04-18 09:56:32.006832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:08.769 [2024-04-18 09:56:32.006916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:08.769 [2024-04-18 09:56:32.012361] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007440) 00:28:08.769 [2024-04-18 09:56:32.012411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:08.769 [2024-04-18 09:56:32.012431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:08.769 [2024-04-18 09:56:32.017827] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007440) 00:28:08.769 [2024-04-18 09:56:32.017878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:08.769 [2024-04-18 09:56:32.017899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:08.769 [2024-04-18 09:56:32.023313] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007440) 00:28:08.769 [2024-04-18 09:56:32.023362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:08.769 [2024-04-18 09:56:32.023382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:08.769 [2024-04-18 09:56:32.028649] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007440) 00:28:08.769 [2024-04-18 09:56:32.028700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:08.769 [2024-04-18 09:56:32.028721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:08.769 [2024-04-18 09:56:32.034114] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007440) 00:28:08.769 [2024-04-18 09:56:32.034177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:08.769 [2024-04-18 09:56:32.034197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:08.769 [2024-04-18 09:56:32.039549] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007440) 00:28:08.769 [2024-04-18 09:56:32.039599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:08.769 [2024-04-18 09:56:32.039635] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:08.769 [2024-04-18 09:56:32.045009] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007440) 00:28:08.769 [2024-04-18 09:56:32.045086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:08.769 [2024-04-18 09:56:32.045106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:08.769 [2024-04-18 09:56:32.050641] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007440) 00:28:08.769 [2024-04-18 09:56:32.050695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:08.769 [2024-04-18 09:56:32.050727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:08.769 [2024-04-18 09:56:32.056301] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007440) 00:28:08.769 [2024-04-18 09:56:32.056388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:08.769 [2024-04-18 09:56:32.056431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:08.769 [2024-04-18 09:56:32.061862] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007440) 00:28:08.769 [2024-04-18 09:56:32.061913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:08.769 [2024-04-18 09:56:32.061933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:08.769 [2024-04-18 09:56:32.067483] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007440) 00:28:08.769 [2024-04-18 09:56:32.067535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:08.769 [2024-04-18 09:56:32.067555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:08.769 [2024-04-18 09:56:32.073043] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007440) 00:28:08.769 [2024-04-18 09:56:32.073099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:08.769 [2024-04-18 09:56:32.073120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:08.769 [2024-04-18 09:56:32.078558] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007440) 00:28:08.769 [2024-04-18 09:56:32.078610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 
lba:6592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:08.769 [2024-04-18 09:56:32.078630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:08.769 [2024-04-18 09:56:32.084198] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007440) 00:28:08.769 [2024-04-18 09:56:32.084251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:08.769 [2024-04-18 09:56:32.084271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:08.769 [2024-04-18 09:56:32.089792] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007440) 00:28:08.769 [2024-04-18 09:56:32.089845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:08.769 [2024-04-18 09:56:32.089866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:08.769 [2024-04-18 09:56:32.095485] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007440) 00:28:08.769 [2024-04-18 09:56:32.095554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:08.769 [2024-04-18 09:56:32.095575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:08.769 [2024-04-18 09:56:32.101310] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007440) 00:28:08.770 [2024-04-18 09:56:32.101389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:08.770 [2024-04-18 09:56:32.101412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:08.770 [2024-04-18 09:56:32.107268] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007440) 00:28:08.770 [2024-04-18 09:56:32.107325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:08.770 [2024-04-18 09:56:32.107345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:08.770 [2024-04-18 09:56:32.112986] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007440) 00:28:08.770 [2024-04-18 09:56:32.113046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:08.770 [2024-04-18 09:56:32.113067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:08.770 [2024-04-18 09:56:32.118597] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007440) 00:28:08.770 [2024-04-18 
09:56:32.118651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:08.770 [2024-04-18 09:56:32.118672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:08.770 [2024-04-18 09:56:32.124228] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007440) 00:28:08.770 [2024-04-18 09:56:32.124297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:08.770 [2024-04-18 09:56:32.124319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:08.770 [2024-04-18 09:56:32.130095] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007440) 00:28:08.770 [2024-04-18 09:56:32.130174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:08.770 [2024-04-18 09:56:32.130196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:08.770 [2024-04-18 09:56:32.135701] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007440) 00:28:08.770 [2024-04-18 09:56:32.135759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:08.770 [2024-04-18 09:56:32.135780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:08.770 [2024-04-18 09:56:32.141322] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007440) 00:28:08.770 [2024-04-18 09:56:32.141373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:08.770 [2024-04-18 09:56:32.141394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:08.770 [2024-04-18 09:56:32.146969] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007440) 00:28:08.770 [2024-04-18 09:56:32.147020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:08.770 [2024-04-18 09:56:32.147040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:08.770 [2024-04-18 09:56:32.152538] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007440) 00:28:08.770 [2024-04-18 09:56:32.152589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:08.770 [2024-04-18 09:56:32.152610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:08.770 [2024-04-18 09:56:32.158066] 
nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007440) 00:28:08.770 [2024-04-18 09:56:32.158118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:08.770 [2024-04-18 09:56:32.158152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:09.032 [2024-04-18 09:56:32.163572] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007440) 00:28:09.032 [2024-04-18 09:56:32.163622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:09.032 [2024-04-18 09:56:32.163643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:09.032 [2024-04-18 09:56:32.169069] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007440) 00:28:09.033 [2024-04-18 09:56:32.169137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:09.033 [2024-04-18 09:56:32.169159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:09.033 [2024-04-18 09:56:32.174730] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007440) 00:28:09.033 [2024-04-18 09:56:32.174793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:09.033 [2024-04-18 09:56:32.174815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:09.033 [2024-04-18 09:56:32.180446] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007440) 00:28:09.033 [2024-04-18 09:56:32.180522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:09.033 [2024-04-18 09:56:32.180559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:09.033 [2024-04-18 09:56:32.186117] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007440) 00:28:09.033 [2024-04-18 09:56:32.186180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:09.033 [2024-04-18 09:56:32.186201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:09.033 [2024-04-18 09:56:32.191819] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007440) 00:28:09.033 [2024-04-18 09:56:32.191879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:09.033 [2024-04-18 09:56:32.191899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:09.033 [2024-04-18 09:56:32.197509] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007440) 00:28:09.033 [2024-04-18 09:56:32.197572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:09.033 [2024-04-18 09:56:32.197593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:09.033 [2024-04-18 09:56:32.203262] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007440) 00:28:09.033 [2024-04-18 09:56:32.203324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:09.033 [2024-04-18 09:56:32.203346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:09.033 [2024-04-18 09:56:32.208805] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007440) 00:28:09.033 [2024-04-18 09:56:32.208859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:09.033 [2024-04-18 09:56:32.208880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:09.033 [2024-04-18 09:56:32.214540] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007440) 00:28:09.033 [2024-04-18 09:56:32.214593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:09.033 [2024-04-18 09:56:32.214614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:09.033 [2024-04-18 09:56:32.220044] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007440) 00:28:09.033 [2024-04-18 09:56:32.220097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:09.033 [2024-04-18 09:56:32.220117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:09.033 [2024-04-18 09:56:32.225613] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007440) 00:28:09.033 [2024-04-18 09:56:32.225665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:09.033 [2024-04-18 09:56:32.225685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:09.033 [2024-04-18 09:56:32.231311] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007440) 00:28:09.033 [2024-04-18 09:56:32.231363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:09.033 [2024-04-18 09:56:32.231383] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:09.033 [2024-04-18 09:56:32.236844] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007440) 00:28:09.033 [2024-04-18 09:56:32.236896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:09.033 [2024-04-18 09:56:32.236917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:09.033 [2024-04-18 09:56:32.242465] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007440) 00:28:09.033 [2024-04-18 09:56:32.242517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:09.033 [2024-04-18 09:56:32.242538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:09.033 [2024-04-18 09:56:32.247865] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007440) 00:28:09.033 [2024-04-18 09:56:32.247940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:09.033 [2024-04-18 09:56:32.247961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:09.033 [2024-04-18 09:56:32.253401] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007440) 00:28:09.033 [2024-04-18 09:56:32.253451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:09.033 [2024-04-18 09:56:32.253472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:09.033 [2024-04-18 09:56:32.258917] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007440) 00:28:09.033 [2024-04-18 09:56:32.258975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:09.033 [2024-04-18 09:56:32.258995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:09.033 [2024-04-18 09:56:32.264316] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007440) 00:28:09.033 [2024-04-18 09:56:32.264365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:09.033 [2024-04-18 09:56:32.264393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:09.033 [2024-04-18 09:56:32.269782] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007440) 00:28:09.033 [2024-04-18 09:56:32.269832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 
lba:17120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:09.033 [2024-04-18 09:56:32.269852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:09.033 [2024-04-18 09:56:32.275235] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007440) 00:28:09.033 [2024-04-18 09:56:32.275285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:09.033 [2024-04-18 09:56:32.275306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:09.033 [2024-04-18 09:56:32.280723] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007440) 00:28:09.033 [2024-04-18 09:56:32.280777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:09.033 [2024-04-18 09:56:32.280798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:09.033 [2024-04-18 09:56:32.286406] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007440) 00:28:09.033 [2024-04-18 09:56:32.286481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:09.033 [2024-04-18 09:56:32.286503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:09.033 [2024-04-18 09:56:32.292305] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007440) 00:28:09.033 [2024-04-18 09:56:32.292371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:09.033 [2024-04-18 09:56:32.292393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:09.033 [2024-04-18 09:56:32.297873] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007440) 00:28:09.033 [2024-04-18 09:56:32.297925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:09.033 [2024-04-18 09:56:32.297962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:09.033 [2024-04-18 09:56:32.303498] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007440) 00:28:09.033 [2024-04-18 09:56:32.303549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:09.034 [2024-04-18 09:56:32.303569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:09.034 [2024-04-18 09:56:32.309021] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007440) 00:28:09.034 [2024-04-18 
09:56:32.309072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:09.034 [2024-04-18 09:56:32.309093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:09.034 [2024-04-18 09:56:32.314516] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007440) 00:28:09.034 [2024-04-18 09:56:32.314567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:09.034 [2024-04-18 09:56:32.314587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:09.034 [2024-04-18 09:56:32.320117] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007440) 00:28:09.034 [2024-04-18 09:56:32.320181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:09.034 [2024-04-18 09:56:32.320209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:09.034 [2024-04-18 09:56:32.325650] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007440) 00:28:09.034 [2024-04-18 09:56:32.325700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:09.034 [2024-04-18 09:56:32.325720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:09.034 [2024-04-18 09:56:32.331102] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007440) 00:28:09.034 [2024-04-18 09:56:32.331164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:09.034 [2024-04-18 09:56:32.331185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:09.034 [2024-04-18 09:56:32.336640] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007440) 00:28:09.034 [2024-04-18 09:56:32.336703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:09.034 [2024-04-18 09:56:32.336724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:09.034 [2024-04-18 09:56:32.342357] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007440) 00:28:09.034 [2024-04-18 09:56:32.342416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:09.034 [2024-04-18 09:56:32.342436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:09.034 [2024-04-18 09:56:32.348034] 
nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007440) 00:28:09.034 [2024-04-18 09:56:32.348093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:09.034 [2024-04-18 09:56:32.348113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:09.034 [2024-04-18 09:56:32.353594] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007440) 00:28:09.034 [2024-04-18 09:56:32.353645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:09.034 [2024-04-18 09:56:32.353681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:09.034 [2024-04-18 09:56:32.359277] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007440) 00:28:09.034 [2024-04-18 09:56:32.359329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:09.034 [2024-04-18 09:56:32.359350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:09.034 [2024-04-18 09:56:32.364763] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007440) 00:28:09.034 [2024-04-18 09:56:32.364815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:09.034 [2024-04-18 09:56:32.364835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:09.034 [2024-04-18 09:56:32.370250] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007440) 00:28:09.034 [2024-04-18 09:56:32.370307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:09.034 [2024-04-18 09:56:32.370334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:09.034 [2024-04-18 09:56:32.375755] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007440) 00:28:09.034 [2024-04-18 09:56:32.375827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:09.034 [2024-04-18 09:56:32.375850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:09.034 [2024-04-18 09:56:32.381359] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007440) 00:28:09.034 [2024-04-18 09:56:32.381415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:09.034 [2024-04-18 09:56:32.381435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:28:09.034 [2024-04-18 09:56:32.386800] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007440)
00:28:09.034 [2024-04-18 09:56:32.386853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:09.034 [2024-04-18 09:56:32.386884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:28:09.034 [2024-04-18 09:56:32.392366] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007440)
00:28:09.034 [2024-04-18 09:56:32.392419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:09.034 [2024-04-18 09:56:32.392440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:28:09.034 [2024-04-18 09:56:32.397807] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007440)
00:28:09.034 [2024-04-18 09:56:32.397861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:09.034 [2024-04-18 09:56:32.397882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:28:09.034 [2024-04-18 09:56:32.403111] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007440)
00:28:09.034 [2024-04-18 09:56:32.403175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:09.034 [2024-04-18 09:56:32.403196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:28:09.034
00:28:09.034 Latency(us)
00:28:09.034 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:28:09.034 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072)
00:28:09.034 nvme0n1 : 2.00 5544.07 693.01 0.00 0.00 2881.37 2502.28 6821.70
00:28:09.034 ===================================================================================================================
00:28:09.034 Total : 5544.07 693.01 0.00 0.00 2881.37 2502.28 6821.70
00:28:09.034 0
00:28:09.321 09:56:32 -- host/digest.sh@71 -- # get_transient_errcount nvme0n1
00:28:09.321 09:56:32 -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1
00:28:09.321 09:56:32 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1
00:28:09.321 09:56:32 -- host/digest.sh@28 -- # jq -r '.bdevs[0]
00:28:09.321 | .driver_specific
00:28:09.321 | .nvme_error
00:28:09.321 | .status_code
00:28:09.321 | .command_transient_transport_error'
00:28:09.580 09:56:32 -- host/digest.sh@71 -- # (( 358 > 0 ))
00:28:09.580 09:56:32 -- host/digest.sh@73 -- # killprocess 80025
00:28:09.580 09:56:32 -- common/autotest_common.sh@936 -- # '[' -z 80025 ']'
00:28:09.580 09:56:32 -- common/autotest_common.sh@940 -- # kill -0 80025
00:28:09.580 09:56:32 -- common/autotest_common.sh@941 -- # uname
00:28:09.580 09:56:32 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']'
00:28:09.580 09:56:32 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 80025
00:28:09.580 09:56:32 -- common/autotest_common.sh@942 -- # process_name=reactor_1
00:28:09.580 09:56:32 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']'
00:28:09.580 killing process with pid 80025
00:28:09.580 09:56:32 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 80025'
00:28:09.580 Received shutdown signal, test time was about 2.000000 seconds
00:28:09.580
00:28:09.580 Latency(us)
00:28:09.580 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:28:09.580 ===================================================================================================================
00:28:09.580 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:28:09.580 09:56:32 -- common/autotest_common.sh@955 -- # kill 80025
00:28:09.580 09:56:32 -- common/autotest_common.sh@960 -- # wait 80025
00:28:10.957 09:56:33 -- host/digest.sh@114 -- # run_bperf_err randwrite 4096 128
00:28:10.957 09:56:33 -- host/digest.sh@54 -- # local rw bs qd
00:28:10.957 09:56:33 -- host/digest.sh@56 -- # rw=randwrite
00:28:10.957 09:56:33 -- host/digest.sh@56 -- # bs=4096
00:28:10.957 09:56:33 -- host/digest.sh@56 -- # qd=128
00:28:10.957 09:56:33 -- host/digest.sh@58 -- # bperfpid=80096
00:28:10.957 09:56:33 -- host/digest.sh@60 -- # waitforlisten 80096 /var/tmp/bperf.sock
00:28:10.957 09:56:33 -- host/digest.sh@57 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z
00:28:10.957 09:56:33 -- common/autotest_common.sh@817 -- # '[' -z 80096 ']'
00:28:10.957 09:56:33 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bperf.sock
00:28:10.957 09:56:33 -- common/autotest_common.sh@822 -- # local max_retries=100
00:28:10.957 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...
00:28:10.957 09:56:33 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...'
00:28:10.957 09:56:33 -- common/autotest_common.sh@826 -- # xtrace_disable
00:28:10.957 09:56:33 -- common/autotest_common.sh@10 -- # set +x
00:28:10.957 [2024-04-18 09:56:34.092147] Starting SPDK v24.05-pre git sha1 65b4e17c6 / DPDK 23.11.0 initialization...
00:28:10.957 [2024-04-18 09:56:34.092407] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80096 ]
00:28:10.957 [2024-04-18 09:56:34.258405] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1
00:28:11.217 [2024-04-18 09:56:34.496976] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1
00:28:11.784 09:56:34 -- common/autotest_common.sh@846 -- # (( i == 0 ))
00:28:11.784 09:56:34 -- common/autotest_common.sh@850 -- # return 0
00:28:11.784 09:56:34 -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:28:11.784 09:56:34 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:28:12.042 09:56:35 -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable
00:28:12.042 09:56:35 -- common/autotest_common.sh@549 -- # xtrace_disable
00:28:12.042 09:56:35 -- common/autotest_common.sh@10 -- # set +x
00:28:12.042 09:56:35 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:28:12.042 09:56:35 -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:28:12.042 09:56:35 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:28:12.301 nvme0n1
00:28:12.301 09:56:35 -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256
00:28:12.301 09:56:35 -- common/autotest_common.sh@549 -- # xtrace_disable
00:28:12.301 09:56:35 -- common/autotest_common.sh@10 -- # set +x
00:28:12.301 09:56:35 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:28:12.301 09:56:35 -- host/digest.sh@69 -- # bperf_py perform_tests
00:28:12.301 09:56:35 -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests
00:28:12.301 Running I/O for 2 seconds...
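Editor's note: the traced commands above are the entire setup for this randwrite pass: bdevperf is started in wait-for-RPC mode on /var/tmp/bperf.sock, per-status-code NVMe error counting and unlimited retries are enabled, CRC32C corruption is injected through the accel error RPC, a controller is attached with data digest (--ddgst) enabled, and perform_tests drives I/O for two seconds. The sketch below only consolidates the commands already recorded in this log; it assumes an SPDK checkout at /home/vagrant/spdk_repo/spdk, a target already listening on 10.0.0.2:4420 exposing nqn.2016-06.io.spdk:cnode1, and that the bare rpc.py calls (the rpc_cmd lines, no -s flag) reach that target's default RPC socket.

  # Sketch only: condensed from the trace above, not a drop-in replacement for host/digest.sh.
  SPDK=/home/vagrant/spdk_repo/spdk
  # Start bdevperf on its own RPC socket; -z makes it wait until perform_tests is called.
  $SPDK/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z &
  # (the harness waits for /var/tmp/bperf.sock to appear first -- the waitforlisten call above)
  # Count NVMe errors per status code and retry indefinitely so injected errors are not fatal.
  $SPDK/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
  # Target-side accel error injection (assumed default RPC socket): clear any previous injection.
  $SPDK/scripts/rpc.py accel_error_inject_error -o crc32c -t disable
  # Attach the controller with data digest enabled so corrupted crc32c is detected on the wire.
  $SPDK/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp \
      -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
  # Re-arm the injection with the flags recorded above, then run the workload.
  $SPDK/scripts/rpc.py accel_error_inject_error -o crc32c -t corrupt -i 256
  $SPDK/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests
  # Read back the transient transport error count accumulated by --nvme-error-stat.
  $SPDK/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 \
      | jq -r '.bdevs[0] | .driver_specific | .nvme_error | .status_code | .command_transient_transport_error'

In the randread pass that just finished, the same readback returned 358, which is what the (( 358 > 0 )) check above asserts; the randwrite pass that follows is expected to accumulate a comparable non-zero count from the WRITE completions logged below.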
00:28:12.301 [2024-04-18 09:56:35.651115] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195fef90 00:28:12.301 [2024-04-18 09:56:35.654453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:1843 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:12.301 [2024-04-18 09:56:35.654515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:12.301 [2024-04-18 09:56:35.671757] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195feb58 00:28:12.301 [2024-04-18 09:56:35.674933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:19537 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:12.301 [2024-04-18 09:56:35.674987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:28:12.301 [2024-04-18 09:56:35.691920] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195fe2e8 00:28:12.559 [2024-04-18 09:56:35.695218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:7797 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:12.559 [2024-04-18 09:56:35.695268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:28:12.559 [2024-04-18 09:56:35.712706] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195fda78 00:28:12.559 [2024-04-18 09:56:35.716000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:10079 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:12.559 [2024-04-18 09:56:35.716056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:28:12.559 [2024-04-18 09:56:35.733789] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195fd208 00:28:12.559 [2024-04-18 09:56:35.737009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:14745 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:12.559 [2024-04-18 09:56:35.737062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:28:12.559 [2024-04-18 09:56:35.755480] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195fc998 00:28:12.559 [2024-04-18 09:56:35.758712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:6323 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:12.559 [2024-04-18 09:56:35.758770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:28:12.559 [2024-04-18 09:56:35.777288] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195fc128 00:28:12.559 [2024-04-18 09:56:35.780518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:17871 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:12.559 [2024-04-18 09:56:35.780576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:13 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:28:12.559 [2024-04-18 09:56:35.799004] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195fb8b8 00:28:12.559 [2024-04-18 09:56:35.802159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18902 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:12.559 [2024-04-18 09:56:35.802215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:28:12.559 [2024-04-18 09:56:35.819939] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195fb048 00:28:12.559 [2024-04-18 09:56:35.823518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:8634 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:12.559 [2024-04-18 09:56:35.823571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:28:12.559 [2024-04-18 09:56:35.843570] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195fa7d8 00:28:12.559 [2024-04-18 09:56:35.846649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:10271 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:12.559 [2024-04-18 09:56:35.846710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:28:12.559 [2024-04-18 09:56:35.869143] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195f9f68 00:28:12.559 [2024-04-18 09:56:35.873032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:16094 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:12.559 [2024-04-18 09:56:35.873107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:28:12.559 [2024-04-18 09:56:35.890944] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195f96f8 00:28:12.559 [2024-04-18 09:56:35.893979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:11680 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:12.559 [2024-04-18 09:56:35.894044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:28:12.560 [2024-04-18 09:56:35.912923] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195f8e88 00:28:12.560 [2024-04-18 09:56:35.915982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:1559 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:12.560 [2024-04-18 09:56:35.916053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:28:12.560 [2024-04-18 09:56:35.934447] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195f8618 00:28:12.560 [2024-04-18 09:56:35.937368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:20758 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:12.560 [2024-04-18 09:56:35.937434] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:28:12.818 [2024-04-18 09:56:35.955783] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195f7da8 00:28:12.818 [2024-04-18 09:56:35.958776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:25343 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:12.818 [2024-04-18 09:56:35.958837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:28:12.818 [2024-04-18 09:56:35.976855] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195f7538 00:28:12.818 [2024-04-18 09:56:35.979757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:4310 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:12.818 [2024-04-18 09:56:35.979818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:28:12.818 [2024-04-18 09:56:35.997787] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195f6cc8 00:28:12.818 [2024-04-18 09:56:36.000797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:18021 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:12.818 [2024-04-18 09:56:36.000864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:12.818 [2024-04-18 09:56:36.019147] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195f6458 00:28:12.818 [2024-04-18 09:56:36.021973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:3269 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:12.818 [2024-04-18 09:56:36.022036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:28:12.818 [2024-04-18 09:56:36.040338] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195f5be8 00:28:12.818 [2024-04-18 09:56:36.043192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:12219 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:12.818 [2024-04-18 09:56:36.043256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:28:12.818 [2024-04-18 09:56:36.061693] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195f5378 00:28:12.818 [2024-04-18 09:56:36.064494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:21310 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:12.818 [2024-04-18 09:56:36.064556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:28:12.818 [2024-04-18 09:56:36.083093] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195f4b08 00:28:12.818 [2024-04-18 09:56:36.085906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:18823 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:28:12.818 [2024-04-18 09:56:36.085979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:28:12.819 [2024-04-18 09:56:36.105602] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195f4298 00:28:12.819 [2024-04-18 09:56:36.108480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:10509 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:12.819 [2024-04-18 09:56:36.108569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:28:12.819 [2024-04-18 09:56:36.127812] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195f3a28 00:28:12.819 [2024-04-18 09:56:36.130659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:11472 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:12.819 [2024-04-18 09:56:36.130738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:28:12.819 [2024-04-18 09:56:36.148549] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195f31b8 00:28:12.819 [2024-04-18 09:56:36.151228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:23634 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:12.819 [2024-04-18 09:56:36.151288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:28:12.819 [2024-04-18 09:56:36.170319] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195f2948 00:28:12.819 [2024-04-18 09:56:36.173087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:24465 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:12.819 [2024-04-18 09:56:36.173172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:28:12.819 [2024-04-18 09:56:36.191694] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195f20d8 00:28:12.819 [2024-04-18 09:56:36.194288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:5756 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:12.819 [2024-04-18 09:56:36.194349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:28:13.077 [2024-04-18 09:56:36.212522] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195f1868 00:28:13.077 [2024-04-18 09:56:36.215255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:11816 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:13.077 [2024-04-18 09:56:36.215314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:28:13.077 [2024-04-18 09:56:36.233501] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195f0ff8 00:28:13.077 [2024-04-18 09:56:36.236056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 
nsid:1 lba:10550 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:13.077 [2024-04-18 09:56:36.236114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:28:13.077 [2024-04-18 09:56:36.254215] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195f0788 00:28:13.077 [2024-04-18 09:56:36.256757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:1990 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:13.077 [2024-04-18 09:56:36.256821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:28:13.077 [2024-04-18 09:56:36.275854] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195eff18 00:28:13.077 [2024-04-18 09:56:36.278516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:595 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:13.077 [2024-04-18 09:56:36.278585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:28:13.077 [2024-04-18 09:56:36.297387] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195ef6a8 00:28:13.077 [2024-04-18 09:56:36.299996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:20959 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:13.077 [2024-04-18 09:56:36.300061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:28:13.077 [2024-04-18 09:56:36.319100] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195eee38 00:28:13.077 [2024-04-18 09:56:36.321612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:3780 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:13.077 [2024-04-18 09:56:36.321691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:28:13.077 [2024-04-18 09:56:36.339810] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195ee5c8 00:28:13.077 [2024-04-18 09:56:36.342222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:7834 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:13.077 [2024-04-18 09:56:36.342283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:13.077 [2024-04-18 09:56:36.360820] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195edd58 00:28:13.077 [2024-04-18 09:56:36.363344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:21634 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:13.077 [2024-04-18 09:56:36.363417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:28:13.077 [2024-04-18 09:56:36.382692] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195ed4e8 00:28:13.077 [2024-04-18 09:56:36.385150] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:22080 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:13.077 [2024-04-18 09:56:36.385223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:28:13.077 [2024-04-18 09:56:36.403319] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195ecc78 00:28:13.077 [2024-04-18 09:56:36.405672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:25474 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:13.077 [2024-04-18 09:56:36.405732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:28:13.077 [2024-04-18 09:56:36.424940] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195ec408 00:28:13.077 [2024-04-18 09:56:36.427429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:9136 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:13.077 [2024-04-18 09:56:36.427493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:28:13.078 [2024-04-18 09:56:36.446778] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195ebb98 00:28:13.078 [2024-04-18 09:56:36.449116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:23975 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:13.078 [2024-04-18 09:56:36.449191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:28:13.078 [2024-04-18 09:56:36.467852] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195eb328 00:28:13.336 [2024-04-18 09:56:36.470303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:10183 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:13.336 [2024-04-18 09:56:36.470369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:28:13.336 [2024-04-18 09:56:36.490154] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195eaab8 00:28:13.336 [2024-04-18 09:56:36.492514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:7594 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:13.336 [2024-04-18 09:56:36.492584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:28:13.336 [2024-04-18 09:56:36.511915] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195ea248 00:28:13.336 [2024-04-18 09:56:36.514239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:6107 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:13.337 [2024-04-18 09:56:36.514310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:28:13.337 [2024-04-18 09:56:36.533858] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195e99d8 
00:28:13.337 [2024-04-18 09:56:36.536093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:16657 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:13.337 [2024-04-18 09:56:36.536162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:28:13.337 [2024-04-18 09:56:36.554486] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195e9168 00:28:13.337 [2024-04-18 09:56:36.556628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:13088 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:13.337 [2024-04-18 09:56:36.556683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:28:13.337 [2024-04-18 09:56:36.575432] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195e88f8 00:28:13.337 [2024-04-18 09:56:36.577644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:10528 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:13.337 [2024-04-18 09:56:36.577704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:28:13.337 [2024-04-18 09:56:36.596307] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195e8088 00:28:13.337 [2024-04-18 09:56:36.598463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:17184 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:13.337 [2024-04-18 09:56:36.598526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:28:13.337 [2024-04-18 09:56:36.617107] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195e7818 00:28:13.337 [2024-04-18 09:56:36.619218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:6850 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:13.337 [2024-04-18 09:56:36.619276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:28:13.337 [2024-04-18 09:56:36.637874] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195e6fa8 00:28:13.337 [2024-04-18 09:56:36.639950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:4902 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:13.337 [2024-04-18 09:56:36.640007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:28:13.337 [2024-04-18 09:56:36.658825] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195e6738 00:28:13.337 [2024-04-18 09:56:36.660958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:9598 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:13.337 [2024-04-18 09:56:36.661013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:28:13.337 [2024-04-18 09:56:36.680067] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest 
error on tqpair=(0x618000004480) with pdu=0x2000195e5ec8 00:28:13.337 [2024-04-18 09:56:36.682170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:18097 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:13.337 [2024-04-18 09:56:36.682248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:13.337 [2024-04-18 09:56:36.702536] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195e5658 00:28:13.337 [2024-04-18 09:56:36.704669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:21863 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:13.337 [2024-04-18 09:56:36.704757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:28:13.337 [2024-04-18 09:56:36.724764] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195e4de8 00:28:13.337 [2024-04-18 09:56:36.726893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:20196 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:13.337 [2024-04-18 09:56:36.726961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:28:13.595 [2024-04-18 09:56:36.746129] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195e4578 00:28:13.595 [2024-04-18 09:56:36.748120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:18443 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:13.595 [2024-04-18 09:56:36.748203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:28:13.595 [2024-04-18 09:56:36.766613] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195e3d08 00:28:13.595 [2024-04-18 09:56:36.768532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:4851 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:13.595 [2024-04-18 09:56:36.768624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:28:13.595 [2024-04-18 09:56:36.787049] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195e3498 00:28:13.595 [2024-04-18 09:56:36.788917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:9313 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:13.595 [2024-04-18 09:56:36.789004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:28:13.595 [2024-04-18 09:56:36.807389] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195e2c28 00:28:13.595 [2024-04-18 09:56:36.809239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:21398 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:13.595 [2024-04-18 09:56:36.809300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:28:13.595 [2024-04-18 
09:56:36.828533] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195e23b8 00:28:13.595 [2024-04-18 09:56:36.830573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:8638 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:13.595 [2024-04-18 09:56:36.830658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:28:13.595 [2024-04-18 09:56:36.850045] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195e1b48 00:28:13.595 [2024-04-18 09:56:36.851860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:377 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:13.595 [2024-04-18 09:56:36.851944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:28:13.595 [2024-04-18 09:56:36.870909] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195e12d8 00:28:13.595 [2024-04-18 09:56:36.872732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:13759 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:13.595 [2024-04-18 09:56:36.872787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:28:13.595 [2024-04-18 09:56:36.891611] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195e0a68 00:28:13.595 [2024-04-18 09:56:36.893388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:3751 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:13.595 [2024-04-18 09:56:36.893449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:28:13.595 [2024-04-18 09:56:36.912389] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195e01f8 00:28:13.595 [2024-04-18 09:56:36.914164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:23871 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:13.595 [2024-04-18 09:56:36.914222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:28:13.595 [2024-04-18 09:56:36.933022] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195df988 00:28:13.595 [2024-04-18 09:56:36.934790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:6916 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:13.595 [2024-04-18 09:56:36.934864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:28:13.595 [2024-04-18 09:56:36.954077] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195df118 00:28:13.595 [2024-04-18 09:56:36.955854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:15426 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:13.595 [2024-04-18 09:56:36.955924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:123 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:28:13.596 [2024-04-18 09:56:36.976454] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195de8a8 00:28:13.596 [2024-04-18 09:56:36.978223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:10164 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:13.596 [2024-04-18 09:56:36.978295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:28:13.853 [2024-04-18 09:56:36.998934] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195de038 00:28:13.853 [2024-04-18 09:56:37.000614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14566 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:13.853 [2024-04-18 09:56:37.000676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:28:13.853 [2024-04-18 09:56:37.028720] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195de038 00:28:13.853 [2024-04-18 09:56:37.032002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:12684 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:13.853 [2024-04-18 09:56:37.032058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:13.853 [2024-04-18 09:56:37.049791] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195de8a8 00:28:13.853 [2024-04-18 09:56:37.053101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:22870 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:13.853 [2024-04-18 09:56:37.053161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:28:13.853 [2024-04-18 09:56:37.071210] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195df118 00:28:13.853 [2024-04-18 09:56:37.074483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:8838 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:13.853 [2024-04-18 09:56:37.074538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:28:13.853 [2024-04-18 09:56:37.093110] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195df988 00:28:13.853 [2024-04-18 09:56:37.096444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:14520 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:13.853 [2024-04-18 09:56:37.096498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:28:13.853 [2024-04-18 09:56:37.114371] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195e01f8 00:28:13.853 [2024-04-18 09:56:37.117622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:1585 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:13.853 [2024-04-18 09:56:37.117691] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:28:13.853 [2024-04-18 09:56:37.135937] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195e0a68 00:28:13.853 [2024-04-18 09:56:37.139149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:798 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:13.853 [2024-04-18 09:56:37.139201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:28:13.853 [2024-04-18 09:56:37.157066] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195e12d8 00:28:13.853 [2024-04-18 09:56:37.160195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:12571 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:13.853 [2024-04-18 09:56:37.160248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:28:13.853 [2024-04-18 09:56:37.177898] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195e1b48 00:28:13.853 [2024-04-18 09:56:37.181069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:9327 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:13.853 [2024-04-18 09:56:37.181143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:28:13.853 [2024-04-18 09:56:37.200221] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195e23b8 00:28:13.853 [2024-04-18 09:56:37.203618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:2888 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:13.853 [2024-04-18 09:56:37.203677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:28:13.853 [2024-04-18 09:56:37.222089] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195e2c28 00:28:13.853 [2024-04-18 09:56:37.225156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:8807 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:13.853 [2024-04-18 09:56:37.225225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:28:13.853 [2024-04-18 09:56:37.242857] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195e3498 00:28:14.111 [2024-04-18 09:56:37.245966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:1437 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:14.111 [2024-04-18 09:56:37.246028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:28:14.111 [2024-04-18 09:56:37.263877] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195e3d08 00:28:14.111 [2024-04-18 09:56:37.267156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:10611 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:28:14.111 [2024-04-18 09:56:37.267210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:28:14.111 [2024-04-18 09:56:37.286628] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195e4578 00:28:14.111 [2024-04-18 09:56:37.289861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:14932 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:14.111 [2024-04-18 09:56:37.289933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:28:14.112 [2024-04-18 09:56:37.308668] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195e4de8 00:28:14.112 [2024-04-18 09:56:37.311631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:2226 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:14.112 [2024-04-18 09:56:37.311685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:28:14.112 [2024-04-18 09:56:37.329382] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195e5658 00:28:14.112 [2024-04-18 09:56:37.332258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:21608 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:14.112 [2024-04-18 09:56:37.332305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:28:14.112 [2024-04-18 09:56:37.349655] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195e5ec8 00:28:14.112 [2024-04-18 09:56:37.352533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:7386 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:14.112 [2024-04-18 09:56:37.352597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:14.112 [2024-04-18 09:56:37.369825] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195e6738 00:28:14.112 [2024-04-18 09:56:37.372673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:10761 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:14.112 [2024-04-18 09:56:37.372734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:28:14.112 [2024-04-18 09:56:37.390420] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195e6fa8 00:28:14.112 [2024-04-18 09:56:37.393191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:17965 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:14.112 [2024-04-18 09:56:37.393252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:28:14.112 [2024-04-18 09:56:37.411365] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195e7818 00:28:14.112 [2024-04-18 09:56:37.414249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 
nsid:1 lba:4616 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:14.112 [2024-04-18 09:56:37.414320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:28:14.112 [2024-04-18 09:56:37.433382] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195e8088 00:28:14.112 [2024-04-18 09:56:37.436378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:1386 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:14.112 [2024-04-18 09:56:37.436435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:28:14.112 [2024-04-18 09:56:37.455616] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195e88f8 00:28:14.112 [2024-04-18 09:56:37.458423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:16458 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:14.112 [2024-04-18 09:56:37.458491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:28:14.112 [2024-04-18 09:56:37.476073] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195e9168 00:28:14.112 [2024-04-18 09:56:37.478725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:20410 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:14.112 [2024-04-18 09:56:37.478787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:28:14.112 [2024-04-18 09:56:37.496958] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195e99d8 00:28:14.112 [2024-04-18 09:56:37.500040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:6456 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:14.112 [2024-04-18 09:56:37.500095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:28:14.370 [2024-04-18 09:56:37.519651] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195ea248 00:28:14.370 [2024-04-18 09:56:37.522469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:7582 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:14.370 [2024-04-18 09:56:37.522525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:28:14.370 [2024-04-18 09:56:37.541685] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195eaab8 00:28:14.370 [2024-04-18 09:56:37.544516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:10282 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:14.370 [2024-04-18 09:56:37.544586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:28:14.370 [2024-04-18 09:56:37.563456] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195eb328 00:28:14.370 [2024-04-18 09:56:37.566059] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:6374 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:14.370 [2024-04-18 09:56:37.566140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:28:14.370 [2024-04-18 09:56:37.584018] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195ebb98 00:28:14.370 [2024-04-18 09:56:37.586561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:8418 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:14.370 [2024-04-18 09:56:37.586623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:28:14.370 [2024-04-18 09:56:37.604150] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195ec408 00:28:14.370 [2024-04-18 09:56:37.606616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:15771 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:14.370 [2024-04-18 09:56:37.606679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:28:14.370 00:28:14.370 Latency(us) 00:28:14.370 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:14.370 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:28:14.370 nvme0n1 : 2.00 11818.93 46.17 0.00 0.00 10819.97 9353.77 39798.23 00:28:14.370 =================================================================================================================== 00:28:14.370 Total : 11818.93 46.17 0.00 0.00 10819.97 9353.77 39798.23 00:28:14.370 0 00:28:14.370 09:56:37 -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:28:14.370 09:56:37 -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:28:14.370 09:56:37 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:28:14.370 09:56:37 -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:28:14.370 | .driver_specific 00:28:14.370 | .nvme_error 00:28:14.370 | .status_code 00:28:14.370 | .command_transient_transport_error' 00:28:14.629 09:56:37 -- host/digest.sh@71 -- # (( 92 > 0 )) 00:28:14.629 09:56:37 -- host/digest.sh@73 -- # killprocess 80096 00:28:14.629 09:56:37 -- common/autotest_common.sh@936 -- # '[' -z 80096 ']' 00:28:14.629 09:56:37 -- common/autotest_common.sh@940 -- # kill -0 80096 00:28:14.629 09:56:37 -- common/autotest_common.sh@941 -- # uname 00:28:14.629 09:56:37 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:28:14.629 09:56:37 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 80096 00:28:14.629 09:56:37 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:28:14.629 09:56:37 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:28:14.629 killing process with pid 80096 00:28:14.629 09:56:37 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 80096' 00:28:14.629 Received shutdown signal, test time was about 2.000000 seconds 00:28:14.629 00:28:14.629 Latency(us) 00:28:14.629 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:14.629 =================================================================================================================== 00:28:14.629 Total : 0.00 0.00 0.00 0.00 0.00 
0.00 0.00 00:28:14.629 09:56:37 -- common/autotest_common.sh@955 -- # kill 80096 00:28:14.629 09:56:37 -- common/autotest_common.sh@960 -- # wait 80096 00:28:16.040 09:56:38 -- host/digest.sh@115 -- # run_bperf_err randwrite 131072 16 00:28:16.040 09:56:38 -- host/digest.sh@54 -- # local rw bs qd 00:28:16.040 09:56:38 -- host/digest.sh@56 -- # rw=randwrite 00:28:16.040 09:56:38 -- host/digest.sh@56 -- # bs=131072 00:28:16.040 09:56:38 -- host/digest.sh@56 -- # qd=16 00:28:16.040 09:56:38 -- host/digest.sh@58 -- # bperfpid=80163 00:28:16.040 09:56:38 -- host/digest.sh@57 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z 00:28:16.040 09:56:38 -- host/digest.sh@60 -- # waitforlisten 80163 /var/tmp/bperf.sock 00:28:16.040 09:56:38 -- common/autotest_common.sh@817 -- # '[' -z 80163 ']' 00:28:16.040 09:56:38 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bperf.sock 00:28:16.040 09:56:38 -- common/autotest_common.sh@822 -- # local max_retries=100 00:28:16.040 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:28:16.040 09:56:38 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:28:16.040 09:56:38 -- common/autotest_common.sh@826 -- # xtrace_disable 00:28:16.040 09:56:38 -- common/autotest_common.sh@10 -- # set +x 00:28:16.040 [2024-04-18 09:56:39.064268] Starting SPDK v24.05-pre git sha1 65b4e17c6 / DPDK 23.11.0 initialization... 00:28:16.040 I/O size of 131072 is greater than zero copy threshold (65536). 00:28:16.040 Zero copy mechanism will not be used. 00:28:16.040 [2024-04-18 09:56:39.064868] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80163 ] 00:28:16.040 [2024-04-18 09:56:39.234734] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:16.298 [2024-04-18 09:56:39.501861] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:28:16.863 09:56:39 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:28:16.863 09:56:39 -- common/autotest_common.sh@850 -- # return 0 00:28:16.863 09:56:39 -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:28:16.864 09:56:39 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:28:16.864 09:56:40 -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:28:16.864 09:56:40 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:16.864 09:56:40 -- common/autotest_common.sh@10 -- # set +x 00:28:16.864 09:56:40 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:16.864 09:56:40 -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:28:16.864 09:56:40 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:28:17.121 nvme0n1 00:28:17.379 09:56:40 -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32 00:28:17.379 09:56:40 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:17.379 
09:56:40 -- common/autotest_common.sh@10 -- # set +x 00:28:17.379 09:56:40 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:17.380 09:56:40 -- host/digest.sh@69 -- # bperf_py perform_tests 00:28:17.380 09:56:40 -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:28:17.380 I/O size of 131072 is greater than zero copy threshold (65536). 00:28:17.380 Zero copy mechanism will not be used. 00:28:17.380 Running I/O for 2 seconds... 00:28:17.380 [2024-04-18 09:56:40.669802] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:28:17.380 [2024-04-18 09:56:40.670287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:17.380 [2024-04-18 09:56:40.670348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:17.380 [2024-04-18 09:56:40.678583] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:28:17.380 [2024-04-18 09:56:40.678995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:17.380 [2024-04-18 09:56:40.679056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:17.380 [2024-04-18 09:56:40.687237] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:28:17.380 [2024-04-18 09:56:40.687622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:17.380 [2024-04-18 09:56:40.687669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:17.380 [2024-04-18 09:56:40.695739] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:28:17.380 [2024-04-18 09:56:40.696141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:17.380 [2024-04-18 09:56:40.696206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:17.380 [2024-04-18 09:56:40.704409] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:28:17.380 [2024-04-18 09:56:40.704803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:17.380 [2024-04-18 09:56:40.704850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:17.380 [2024-04-18 09:56:40.713541] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:28:17.380 [2024-04-18 09:56:40.713938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:17.380 [2024-04-18 09:56:40.713993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:17.380 [2024-04-18 09:56:40.722443] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:28:17.380 [2024-04-18 09:56:40.722926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:17.380 [2024-04-18 09:56:40.722984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:17.380 [2024-04-18 09:56:40.731335] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:28:17.380 [2024-04-18 09:56:40.731816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:17.380 [2024-04-18 09:56:40.731874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:17.380 [2024-04-18 09:56:40.740268] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:28:17.380 [2024-04-18 09:56:40.740715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:17.380 [2024-04-18 09:56:40.740768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:17.380 [2024-04-18 09:56:40.748906] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:28:17.380 [2024-04-18 09:56:40.749359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:17.380 [2024-04-18 09:56:40.749406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:17.380 [2024-04-18 09:56:40.757716] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:28:17.380 [2024-04-18 09:56:40.758132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:17.380 [2024-04-18 09:56:40.758198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:17.380 [2024-04-18 09:56:40.766455] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:28:17.380 [2024-04-18 09:56:40.766882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:17.380 [2024-04-18 09:56:40.766938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:17.639 [2024-04-18 09:56:40.775077] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:28:17.639 [2024-04-18 09:56:40.775465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:17.639 [2024-04-18 
09:56:40.775522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:17.639 [2024-04-18 09:56:40.783618] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:28:17.639 [2024-04-18 09:56:40.784050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:17.639 [2024-04-18 09:56:40.784094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:17.639 [2024-04-18 09:56:40.792140] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:28:17.639 [2024-04-18 09:56:40.792520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:17.639 [2024-04-18 09:56:40.792566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:17.639 [2024-04-18 09:56:40.800585] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:28:17.639 [2024-04-18 09:56:40.800980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:17.639 [2024-04-18 09:56:40.801035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:17.639 [2024-04-18 09:56:40.809056] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:28:17.639 [2024-04-18 09:56:40.809486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:17.639 [2024-04-18 09:56:40.809533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:17.639 [2024-04-18 09:56:40.817756] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:28:17.639 [2024-04-18 09:56:40.818139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:17.639 [2024-04-18 09:56:40.818214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:17.639 [2024-04-18 09:56:40.826187] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:28:17.639 [2024-04-18 09:56:40.826616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:17.639 [2024-04-18 09:56:40.826663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:17.639 [2024-04-18 09:56:40.835033] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:28:17.639 [2024-04-18 09:56:40.835477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 
lba:20160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:17.639 [2024-04-18 09:56:40.835546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:17.639 [2024-04-18 09:56:40.843835] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:28:17.639 [2024-04-18 09:56:40.844305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:17.639 [2024-04-18 09:56:40.844353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:17.639 [2024-04-18 09:56:40.852835] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:28:17.639 [2024-04-18 09:56:40.853311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:17.639 [2024-04-18 09:56:40.853355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:17.639 [2024-04-18 09:56:40.861408] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:28:17.639 [2024-04-18 09:56:40.861801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:17.639 [2024-04-18 09:56:40.861856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:17.639 [2024-04-18 09:56:40.869785] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:28:17.639 [2024-04-18 09:56:40.870197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:17.639 [2024-04-18 09:56:40.870241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:17.639 [2024-04-18 09:56:40.878222] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:28:17.639 [2024-04-18 09:56:40.878616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:17.639 [2024-04-18 09:56:40.878672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:17.639 [2024-04-18 09:56:40.886680] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:28:17.639 [2024-04-18 09:56:40.887073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:17.639 [2024-04-18 09:56:40.887120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:17.639 [2024-04-18 09:56:40.895099] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:28:17.639 [2024-04-18 09:56:40.895483] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:17.639 [2024-04-18 09:56:40.895535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:17.639 [2024-04-18 09:56:40.903635] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:28:17.639 [2024-04-18 09:56:40.904046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:17.639 [2024-04-18 09:56:40.904094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:17.639 [2024-04-18 09:56:40.912396] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:28:17.639 [2024-04-18 09:56:40.912796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:17.639 [2024-04-18 09:56:40.912839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:17.639 [2024-04-18 09:56:40.920991] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:28:17.639 [2024-04-18 09:56:40.921409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:17.639 [2024-04-18 09:56:40.921464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:17.639 [2024-04-18 09:56:40.929669] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:28:17.639 [2024-04-18 09:56:40.930134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:17.639 [2024-04-18 09:56:40.930191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:17.639 [2024-04-18 09:56:40.938435] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:28:17.639 [2024-04-18 09:56:40.938865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:17.639 [2024-04-18 09:56:40.938930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:17.640 [2024-04-18 09:56:40.947035] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:28:17.640 [2024-04-18 09:56:40.947462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:17.640 [2024-04-18 09:56:40.947505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:17.640 [2024-04-18 09:56:40.955595] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x618000005080) with pdu=0x2000195fef90 00:28:17.640 [2024-04-18 09:56:40.955998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:17.640 [2024-04-18 09:56:40.956053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:17.640 [2024-04-18 09:56:40.964007] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:28:17.640 [2024-04-18 09:56:40.964433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:17.640 [2024-04-18 09:56:40.964478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:17.640 [2024-04-18 09:56:40.972471] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:28:17.640 [2024-04-18 09:56:40.972900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:17.640 [2024-04-18 09:56:40.972954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:17.640 [2024-04-18 09:56:40.980958] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:28:17.640 [2024-04-18 09:56:40.981398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:17.640 [2024-04-18 09:56:40.981469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:17.640 [2024-04-18 09:56:40.989508] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:28:17.640 [2024-04-18 09:56:40.989929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:17.640 [2024-04-18 09:56:40.989973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:17.640 [2024-04-18 09:56:40.998019] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:28:17.640 [2024-04-18 09:56:40.998440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:17.640 [2024-04-18 09:56:40.998503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:17.640 [2024-04-18 09:56:41.006457] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:28:17.640 [2024-04-18 09:56:41.006860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:17.640 [2024-04-18 09:56:41.006912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:17.640 [2024-04-18 
09:56:41.014945] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:28:17.640 [2024-04-18 09:56:41.015375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:17.640 [2024-04-18 09:56:41.015429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:17.640 [2024-04-18 09:56:41.023452] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:28:17.640 [2024-04-18 09:56:41.023858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:17.640 [2024-04-18 09:56:41.023913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:17.898 [2024-04-18 09:56:41.031818] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:28:17.898 [2024-04-18 09:56:41.032237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:17.898 [2024-04-18 09:56:41.032286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:17.898 [2024-04-18 09:56:41.040284] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:28:17.898 [2024-04-18 09:56:41.040686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:17.898 [2024-04-18 09:56:41.040741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:17.899 [2024-04-18 09:56:41.048623] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:28:17.899 [2024-04-18 09:56:41.049047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:17.899 [2024-04-18 09:56:41.049094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:17.899 [2024-04-18 09:56:41.057224] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:28:17.899 [2024-04-18 09:56:41.057651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:17.899 [2024-04-18 09:56:41.057709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:17.899 [2024-04-18 09:56:41.065802] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:28:17.899 [2024-04-18 09:56:41.066248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:17.899 [2024-04-18 09:56:41.066297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:17.899 [2024-04-18 09:56:41.074217] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:28:17.899 [2024-04-18 09:56:41.074625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:17.899 [2024-04-18 09:56:41.074678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:17.899 [2024-04-18 09:56:41.082729] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:28:17.899 [2024-04-18 09:56:41.083187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:17.899 [2024-04-18 09:56:41.083248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:17.899 [2024-04-18 09:56:41.091547] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:28:17.899 [2024-04-18 09:56:41.092015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:17.899 [2024-04-18 09:56:41.092073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:17.899 [2024-04-18 09:56:41.100251] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:28:17.899 [2024-04-18 09:56:41.100677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:17.899 [2024-04-18 09:56:41.100731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:17.899 [2024-04-18 09:56:41.108632] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:28:17.899 [2024-04-18 09:56:41.109047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:17.899 [2024-04-18 09:56:41.109088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:17.899 [2024-04-18 09:56:41.117007] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:28:17.899 [2024-04-18 09:56:41.117428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:17.899 [2024-04-18 09:56:41.117476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:17.899 [2024-04-18 09:56:41.125516] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:28:17.899 [2024-04-18 09:56:41.125921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:17.899 [2024-04-18 09:56:41.125981] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:17.899 [2024-04-18 09:56:41.133973] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:28:17.899 [2024-04-18 09:56:41.134384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:17.899 [2024-04-18 09:56:41.134435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:17.899 [2024-04-18 09:56:41.142633] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:28:17.899 [2024-04-18 09:56:41.143065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:17.899 [2024-04-18 09:56:41.143144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:17.899 [2024-04-18 09:56:41.151265] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:28:17.899 [2024-04-18 09:56:41.151699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:17.899 [2024-04-18 09:56:41.151737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:17.899 [2024-04-18 09:56:41.159772] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:28:17.899 [2024-04-18 09:56:41.160202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:17.899 [2024-04-18 09:56:41.160251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:17.899 [2024-04-18 09:56:41.168382] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:28:17.899 [2024-04-18 09:56:41.168796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:17.899 [2024-04-18 09:56:41.168835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:17.899 [2024-04-18 09:56:41.176824] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:28:17.899 [2024-04-18 09:56:41.177249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:17.899 [2024-04-18 09:56:41.177305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:17.899 [2024-04-18 09:56:41.185374] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:28:17.899 [2024-04-18 09:56:41.185821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7008 len:32 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:28:17.899 [2024-04-18 09:56:41.185859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:17.899 [2024-04-18 09:56:41.193943] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:28:17.899 [2024-04-18 09:56:41.194454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:17.899 [2024-04-18 09:56:41.194498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:17.899 [2024-04-18 09:56:41.202356] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:28:17.899 [2024-04-18 09:56:41.202769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:17.899 [2024-04-18 09:56:41.202817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:17.899 [2024-04-18 09:56:41.210819] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:28:17.899 [2024-04-18 09:56:41.211272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:17.899 [2024-04-18 09:56:41.211320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:17.899 [2024-04-18 09:56:41.219426] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:28:17.899 [2024-04-18 09:56:41.219836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:17.899 [2024-04-18 09:56:41.219890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:17.899 [2024-04-18 09:56:41.227858] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:28:17.899 [2024-04-18 09:56:41.228294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:17.899 [2024-04-18 09:56:41.228333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:17.899 [2024-04-18 09:56:41.236420] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:28:17.899 [2024-04-18 09:56:41.236822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:17.899 [2024-04-18 09:56:41.236873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:17.899 [2024-04-18 09:56:41.244954] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:28:17.899 [2024-04-18 09:56:41.245422] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:17.899 [2024-04-18 09:56:41.245461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:17.899 [2024-04-18 09:56:41.253497] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:28:17.899 [2024-04-18 09:56:41.253928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:17.899 [2024-04-18 09:56:41.253967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:17.899 [2024-04-18 09:56:41.262018] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:28:17.899 [2024-04-18 09:56:41.262441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:17.899 [2024-04-18 09:56:41.262493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:17.899 [2024-04-18 09:56:41.270559] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:28:17.899 [2024-04-18 09:56:41.271004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:17.900 [2024-04-18 09:56:41.271044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:17.900 [2024-04-18 09:56:41.279341] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:28:17.900 [2024-04-18 09:56:41.279782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:17.900 [2024-04-18 09:56:41.279842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:17.900 [2024-04-18 09:56:41.287860] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:28:17.900 [2024-04-18 09:56:41.288321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:17.900 [2024-04-18 09:56:41.288384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:18.158 [2024-04-18 09:56:41.296407] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:28:18.158 [2024-04-18 09:56:41.296819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:18.158 [2024-04-18 09:56:41.296882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:18.158 [2024-04-18 09:56:41.304887] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) 
with pdu=0x2000195fef90 00:28:18.158 [2024-04-18 09:56:41.305312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:18.158 [2024-04-18 09:56:41.305385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:18.158 [2024-04-18 09:56:41.313476] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:28:18.158 [2024-04-18 09:56:41.313897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:18.158 [2024-04-18 09:56:41.313941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:18.158 [2024-04-18 09:56:41.321945] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:28:18.158 [2024-04-18 09:56:41.322369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:18.158 [2024-04-18 09:56:41.322426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:18.158 [2024-04-18 09:56:41.330444] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:28:18.158 [2024-04-18 09:56:41.330859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:18.158 [2024-04-18 09:56:41.330908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:18.158 [2024-04-18 09:56:41.338931] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:28:18.159 [2024-04-18 09:56:41.339356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:18.159 [2024-04-18 09:56:41.339409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:18.159 [2024-04-18 09:56:41.347507] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:28:18.159 [2024-04-18 09:56:41.347931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:18.159 [2024-04-18 09:56:41.347971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:18.159 [2024-04-18 09:56:41.356339] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:28:18.159 [2024-04-18 09:56:41.356773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:18.159 [2024-04-18 09:56:41.356821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:18.159 [2024-04-18 09:56:41.365073] 
tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:28:18.159 [2024-04-18 09:56:41.365523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:18.159 [2024-04-18 09:56:41.365571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:18.159 [2024-04-18 09:56:41.373549] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:28:18.159 [2024-04-18 09:56:41.373977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:18.159 [2024-04-18 09:56:41.374016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:18.159 [2024-04-18 09:56:41.382035] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:28:18.159 [2024-04-18 09:56:41.382469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:18.159 [2024-04-18 09:56:41.382547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:18.159 [2024-04-18 09:56:41.390563] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:28:18.159 [2024-04-18 09:56:41.390989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:18.159 [2024-04-18 09:56:41.391029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:18.159 [2024-04-18 09:56:41.399114] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:28:18.159 [2024-04-18 09:56:41.399552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:18.159 [2024-04-18 09:56:41.399603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:18.159 [2024-04-18 09:56:41.407838] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:28:18.159 [2024-04-18 09:56:41.408293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:18.159 [2024-04-18 09:56:41.408338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:18.159 [2024-04-18 09:56:41.416635] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:28:18.159 [2024-04-18 09:56:41.417073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:18.159 [2024-04-18 09:56:41.417152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 
cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:18.159 [2024-04-18 09:56:41.425349] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:28:18.159 [2024-04-18 09:56:41.425785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:18.159 [2024-04-18 09:56:41.425847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:18.159 [2024-04-18 09:56:41.434109] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:28:18.159 [2024-04-18 09:56:41.434577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:18.159 [2024-04-18 09:56:41.434627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:18.159 [2024-04-18 09:56:41.442596] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:28:18.159 [2024-04-18 09:56:41.443024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:18.159 [2024-04-18 09:56:41.443075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:18.159 [2024-04-18 09:56:41.451084] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:28:18.159 [2024-04-18 09:56:41.451528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:18.159 [2024-04-18 09:56:41.451567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:18.159 [2024-04-18 09:56:41.459534] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:28:18.159 [2024-04-18 09:56:41.459938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:18.159 [2024-04-18 09:56:41.459987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:18.159 [2024-04-18 09:56:41.467953] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:28:18.159 [2024-04-18 09:56:41.468370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:18.159 [2024-04-18 09:56:41.468415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:18.159 [2024-04-18 09:56:41.476323] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:28:18.159 [2024-04-18 09:56:41.476713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:18.159 [2024-04-18 09:56:41.476762] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:18.159 [2024-04-18 09:56:41.484721] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:28:18.159 [2024-04-18 09:56:41.485118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:18.159 [2024-04-18 09:56:41.485183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:18.159 [2024-04-18 09:56:41.493118] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:28:18.159 [2024-04-18 09:56:41.493551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:18.159 [2024-04-18 09:56:41.493596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:18.159 [2024-04-18 09:56:41.501599] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:28:18.159 [2024-04-18 09:56:41.502011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:18.159 [2024-04-18 09:56:41.502063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:18.159 [2024-04-18 09:56:41.510160] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:28:18.159 [2024-04-18 09:56:41.510583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:18.159 [2024-04-18 09:56:41.510623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:18.159 [2024-04-18 09:56:41.518718] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:28:18.159 [2024-04-18 09:56:41.519159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:18.159 [2024-04-18 09:56:41.519212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:18.159 [2024-04-18 09:56:41.527198] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:28:18.159 [2024-04-18 09:56:41.527627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:18.159 [2024-04-18 09:56:41.527666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:18.159 [2024-04-18 09:56:41.535885] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:28:18.159 [2024-04-18 09:56:41.536349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20448 len:32 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:28:18.159 [2024-04-18 09:56:41.536394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:18.159 [2024-04-18 09:56:41.544603] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:28:18.159 [2024-04-18 09:56:41.545040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:18.159 [2024-04-18 09:56:41.545094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:18.417 [2024-04-18 09:56:41.553415] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:28:18.418 [2024-04-18 09:56:41.553885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:18.418 [2024-04-18 09:56:41.553937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:18.418 [2024-04-18 09:56:41.562108] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:28:18.418 [2024-04-18 09:56:41.562553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:18.418 [2024-04-18 09:56:41.562613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:18.418 [2024-04-18 09:56:41.570642] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:28:18.418 [2024-04-18 09:56:41.571080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:18.418 [2024-04-18 09:56:41.571122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:18.418 [2024-04-18 09:56:41.579078] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:28:18.418 [2024-04-18 09:56:41.579485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:18.418 [2024-04-18 09:56:41.579529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:18.418 [2024-04-18 09:56:41.587520] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:28:18.418 [2024-04-18 09:56:41.587928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:18.418 [2024-04-18 09:56:41.587967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:18.418 [2024-04-18 09:56:41.595872] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:28:18.418 [2024-04-18 09:56:41.596296] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:18.418 [2024-04-18 09:56:41.596335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:18.418 [2024-04-18 09:56:41.604390] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:28:18.418 [2024-04-18 09:56:41.604788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:18.418 [2024-04-18 09:56:41.604840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:18.418 [2024-04-18 09:56:41.612934] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:28:18.418 [2024-04-18 09:56:41.613380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:18.418 [2024-04-18 09:56:41.613423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:18.418 [2024-04-18 09:56:41.621546] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:28:18.418 [2024-04-18 09:56:41.621951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:18.418 [2024-04-18 09:56:41.622012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:18.418 [2024-04-18 09:56:41.629988] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:28:18.418 [2024-04-18 09:56:41.630412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:18.418 [2024-04-18 09:56:41.630460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:18.418 [2024-04-18 09:56:41.638572] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:28:18.418 [2024-04-18 09:56:41.639002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:18.418 [2024-04-18 09:56:41.639066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:18.418 [2024-04-18 09:56:41.647083] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:28:18.418 [2024-04-18 09:56:41.647510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:18.418 [2024-04-18 09:56:41.647561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:18.418 [2024-04-18 09:56:41.655685] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with 
pdu=0x2000195fef90 00:28:18.418 [2024-04-18 09:56:41.656084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:18.418 [2024-04-18 09:56:41.656137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:18.418 [2024-04-18 09:56:41.664257] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:28:18.418 [2024-04-18 09:56:41.664659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:18.418 [2024-04-18 09:56:41.664712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:18.418 [2024-04-18 09:56:41.672836] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:28:18.418 [2024-04-18 09:56:41.673273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:18.418 [2024-04-18 09:56:41.673306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:18.418 [2024-04-18 09:56:41.681377] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:28:18.418 [2024-04-18 09:56:41.681780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:18.418 [2024-04-18 09:56:41.681821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:18.418 [2024-04-18 09:56:41.689882] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:28:18.418 [2024-04-18 09:56:41.690306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:18.418 [2024-04-18 09:56:41.690345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:18.418 [2024-04-18 09:56:41.698338] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:28:18.418 [2024-04-18 09:56:41.698722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:18.418 [2024-04-18 09:56:41.698768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:18.418 [2024-04-18 09:56:41.706738] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:28:18.418 [2024-04-18 09:56:41.707151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:18.418 [2024-04-18 09:56:41.707201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:18.418 [2024-04-18 09:56:41.715222] 
tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:28:18.418 [2024-04-18 09:56:41.715646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:18.418 [2024-04-18 09:56:41.715684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:18.418 [2024-04-18 09:56:41.723666] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:28:18.418 [2024-04-18 09:56:41.724081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:18.418 [2024-04-18 09:56:41.724143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:18.418 [2024-04-18 09:56:41.732177] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:28:18.418 [2024-04-18 09:56:41.732590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:18.418 [2024-04-18 09:56:41.732628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:18.418 [2024-04-18 09:56:41.740759] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:28:18.418 [2024-04-18 09:56:41.741207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:18.418 [2024-04-18 09:56:41.741258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:18.418 [2024-04-18 09:56:41.749224] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:28:18.418 [2024-04-18 09:56:41.749648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:18.418 [2024-04-18 09:56:41.749687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:18.418 [2024-04-18 09:56:41.757679] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:28:18.418 [2024-04-18 09:56:41.758079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:18.418 [2024-04-18 09:56:41.758137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:18.418 [2024-04-18 09:56:41.766117] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:28:18.418 [2024-04-18 09:56:41.766521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:18.418 [2024-04-18 09:56:41.766578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 
cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:18.418 [2024-04-18 09:56:41.774592] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:28:18.418 [2024-04-18 09:56:41.775021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:18.418 [2024-04-18 09:56:41.775066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:18.419 [2024-04-18 09:56:41.783216] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:28:18.419 [2024-04-18 09:56:41.783642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:18.419 [2024-04-18 09:56:41.783710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:18.419 [2024-04-18 09:56:41.791795] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:28:18.419 [2024-04-18 09:56:41.792237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:18.419 [2024-04-18 09:56:41.792287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:18.419 [2024-04-18 09:56:41.800217] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:28:18.419 [2024-04-18 09:56:41.800623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:18.419 [2024-04-18 09:56:41.800674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:18.677 [2024-04-18 09:56:41.808637] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:28:18.677 [2024-04-18 09:56:41.809046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:18.677 [2024-04-18 09:56:41.809087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:18.677 [2024-04-18 09:56:41.817109] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:28:18.677 [2024-04-18 09:56:41.817527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:18.677 [2024-04-18 09:56:41.817567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:18.677 [2024-04-18 09:56:41.825543] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:28:18.677 [2024-04-18 09:56:41.825935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:18.677 [2024-04-18 09:56:41.825986] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:18.677 [2024-04-18 09:56:41.834015] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:28:18.677 [2024-04-18 09:56:41.834439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:18.677 [2024-04-18 09:56:41.834479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:18.677 [2024-04-18 09:56:41.842567] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:28:18.677 [2024-04-18 09:56:41.842986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:18.677 [2024-04-18 09:56:41.843054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:18.677 [2024-04-18 09:56:41.851015] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:28:18.677 [2024-04-18 09:56:41.851470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:18.677 [2024-04-18 09:56:41.851523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:18.677 [2024-04-18 09:56:41.859715] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:28:18.677 [2024-04-18 09:56:41.860149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:18.677 [2024-04-18 09:56:41.860222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:18.677 [2024-04-18 09:56:41.868409] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:28:18.677 [2024-04-18 09:56:41.868896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:18.677 [2024-04-18 09:56:41.868955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:18.677 [2024-04-18 09:56:41.876949] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:28:18.677 [2024-04-18 09:56:41.877383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:18.677 [2024-04-18 09:56:41.877426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:18.677 [2024-04-18 09:56:41.885448] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:28:18.677 [2024-04-18 09:56:41.885846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15488 len:32 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:28:18.677 [2024-04-18 09:56:41.885902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:18.677 [2024-04-18 09:56:41.893917] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:28:18.677 [2024-04-18 09:56:41.894360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:18.678 [2024-04-18 09:56:41.894406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:18.678 [2024-04-18 09:56:41.902482] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:28:18.678 [2024-04-18 09:56:41.902931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:18.678 [2024-04-18 09:56:41.902981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:18.678 [2024-04-18 09:56:41.910994] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:28:18.678 [2024-04-18 09:56:41.911427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:18.678 [2024-04-18 09:56:41.911468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:18.678 [2024-04-18 09:56:41.919564] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:28:18.678 [2024-04-18 09:56:41.919971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:18.678 [2024-04-18 09:56:41.920018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:18.678 [2024-04-18 09:56:41.928075] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:28:18.678 [2024-04-18 09:56:41.928519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:18.678 [2024-04-18 09:56:41.928558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:18.678 [2024-04-18 09:56:41.936584] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:28:18.678 [2024-04-18 09:56:41.937007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:18.678 [2024-04-18 09:56:41.937047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:18.678 [2024-04-18 09:56:41.944983] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:28:18.678 [2024-04-18 09:56:41.945388] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:18.678 [2024-04-18 09:56:41.945436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:18.678 [2024-04-18 09:56:41.953415] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:28:18.678 [2024-04-18 09:56:41.953836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:18.678 [2024-04-18 09:56:41.953875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:18.678 [2024-04-18 09:56:41.962017] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:28:18.678 [2024-04-18 09:56:41.962460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:18.678 [2024-04-18 09:56:41.962506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:18.678 [2024-04-18 09:56:41.970595] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:28:18.678 [2024-04-18 09:56:41.971026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:18.678 [2024-04-18 09:56:41.971080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:18.678 [2024-04-18 09:56:41.979108] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:28:18.678 [2024-04-18 09:56:41.979535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:18.678 [2024-04-18 09:56:41.979589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:18.678 [2024-04-18 09:56:41.987638] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:28:18.678 [2024-04-18 09:56:41.988041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:18.678 [2024-04-18 09:56:41.988099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:18.678 [2024-04-18 09:56:41.996083] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:28:18.678 [2024-04-18 09:56:41.996491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:18.678 [2024-04-18 09:56:41.996543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:18.678 [2024-04-18 09:56:42.004571] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with 
pdu=0x2000195fef90 00:28:18.678 [2024-04-18 09:56:42.004961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:18.678 [2024-04-18 09:56:42.005003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:18.678 [2024-04-18 09:56:42.013009] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:28:18.678 [2024-04-18 09:56:42.013426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:18.678 [2024-04-18 09:56:42.013466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:18.678 [2024-04-18 09:56:42.021469] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:28:18.678 [2024-04-18 09:56:42.021864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:18.678 [2024-04-18 09:56:42.021902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:18.678 [2024-04-18 09:56:42.029916] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:28:18.678 [2024-04-18 09:56:42.030326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:18.678 [2024-04-18 09:56:42.030366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:18.678 [2024-04-18 09:56:42.038399] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:28:18.678 [2024-04-18 09:56:42.038799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:18.678 [2024-04-18 09:56:42.038838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:18.678 [2024-04-18 09:56:42.046800] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:28:18.678 [2024-04-18 09:56:42.047223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:18.678 [2024-04-18 09:56:42.047263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:18.678 [2024-04-18 09:56:42.055269] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:28:18.678 [2024-04-18 09:56:42.055669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:18.678 [2024-04-18 09:56:42.055707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:18.678 [2024-04-18 09:56:42.063845] 
tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:28:18.678 [2024-04-18 09:56:42.064285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:18.678 [2024-04-18 09:56:42.064325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:18.937 [2024-04-18 09:56:42.072403] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:28:18.937 [2024-04-18 09:56:42.072836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:18.937 [2024-04-18 09:56:42.072879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:18.937 [2024-04-18 09:56:42.080726] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:28:18.937 [2024-04-18 09:56:42.081146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:18.937 [2024-04-18 09:56:42.081186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:18.937 [2024-04-18 09:56:42.089107] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:28:18.937 [2024-04-18 09:56:42.089523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:18.937 [2024-04-18 09:56:42.089563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:18.937 [2024-04-18 09:56:42.097541] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:28:18.937 [2024-04-18 09:56:42.097938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:18.937 [2024-04-18 09:56:42.097979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:18.937 [2024-04-18 09:56:42.106025] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:28:18.938 [2024-04-18 09:56:42.106448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:18.938 [2024-04-18 09:56:42.106489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:18.938 [2024-04-18 09:56:42.114515] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:28:18.938 [2024-04-18 09:56:42.114936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:18.938 [2024-04-18 09:56:42.114977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 
cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:18.938 [2024-04-18 09:56:42.122997] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:28:18.938 [2024-04-18 09:56:42.123444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:18.938 [2024-04-18 09:56:42.123485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:18.938 [2024-04-18 09:56:42.131720] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:28:18.938 [2024-04-18 09:56:42.132153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:18.938 [2024-04-18 09:56:42.132193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:18.938 [2024-04-18 09:56:42.140335] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:28:18.938 [2024-04-18 09:56:42.140752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:18.938 [2024-04-18 09:56:42.140793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:18.938 [2024-04-18 09:56:42.148933] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:28:18.938 [2024-04-18 09:56:42.149351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:18.938 [2024-04-18 09:56:42.149392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:18.938 [2024-04-18 09:56:42.157553] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:28:18.938 [2024-04-18 09:56:42.157972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:18.938 [2024-04-18 09:56:42.158012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:18.938 [2024-04-18 09:56:42.165982] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:28:18.938 [2024-04-18 09:56:42.166378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:18.938 [2024-04-18 09:56:42.166418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:18.938 [2024-04-18 09:56:42.174318] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:28:18.938 [2024-04-18 09:56:42.174695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:18.938 [2024-04-18 09:56:42.174735] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:18.938 [2024-04-18 09:56:42.182769] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:28:18.938 [2024-04-18 09:56:42.183180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:18.938 [2024-04-18 09:56:42.183220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:18.938 [2024-04-18 09:56:42.191280] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:28:18.938 [2024-04-18 09:56:42.191676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:18.938 [2024-04-18 09:56:42.191714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:18.938 [2024-04-18 09:56:42.199789] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:28:18.938 [2024-04-18 09:56:42.200197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:18.938 [2024-04-18 09:56:42.200236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:18.938 [2024-04-18 09:56:42.208199] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:28:18.938 [2024-04-18 09:56:42.208597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:18.938 [2024-04-18 09:56:42.208637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:18.938 [2024-04-18 09:56:42.216651] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:28:18.938 [2024-04-18 09:56:42.217046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:18.938 [2024-04-18 09:56:42.217085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:18.938 [2024-04-18 09:56:42.225022] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:28:18.938 [2024-04-18 09:56:42.225426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:18.938 [2024-04-18 09:56:42.225465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:18.938 [2024-04-18 09:56:42.233436] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:28:18.938 [2024-04-18 09:56:42.233838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24768 len:32 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:28:18.938 [2024-04-18 09:56:42.233891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:18.938 [2024-04-18 09:56:42.241970] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:28:18.938 [2024-04-18 09:56:42.242379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:18.938 [2024-04-18 09:56:42.242429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:18.938 [2024-04-18 09:56:42.250335] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:28:18.938 [2024-04-18 09:56:42.250707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:18.938 [2024-04-18 09:56:42.250747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:18.938 [2024-04-18 09:56:42.258783] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:28:18.938 [2024-04-18 09:56:42.259240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:18.938 [2024-04-18 09:56:42.259279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:18.938 [2024-04-18 09:56:42.267553] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:28:18.938 [2024-04-18 09:56:42.267983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:18.938 [2024-04-18 09:56:42.268023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:18.938 [2024-04-18 09:56:42.276228] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:28:18.938 [2024-04-18 09:56:42.276665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:18.938 [2024-04-18 09:56:42.276704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:18.938 [2024-04-18 09:56:42.284924] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:28:18.938 [2024-04-18 09:56:42.285377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:18.938 [2024-04-18 09:56:42.285428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:18.938 [2024-04-18 09:56:42.293512] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:28:18.938 [2024-04-18 09:56:42.293929] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:18.938 [2024-04-18 09:56:42.293970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:18.938 [2024-04-18 09:56:42.301872] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:28:18.938 [2024-04-18 09:56:42.302286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:18.938 [2024-04-18 09:56:42.302326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:18.938 [2024-04-18 09:56:42.310269] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:28:18.938 [2024-04-18 09:56:42.310645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:18.938 [2024-04-18 09:56:42.310687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:18.938 [2024-04-18 09:56:42.318626] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:28:18.938 [2024-04-18 09:56:42.319012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:18.938 [2024-04-18 09:56:42.319063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:18.938 [2024-04-18 09:56:42.327052] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:28:18.938 [2024-04-18 09:56:42.327452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:18.938 [2024-04-18 09:56:42.327505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:19.198 [2024-04-18 09:56:42.335524] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:28:19.198 [2024-04-18 09:56:42.335900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:19.198 [2024-04-18 09:56:42.335942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:19.198 [2024-04-18 09:56:42.343855] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:28:19.198 [2024-04-18 09:56:42.344231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:19.198 [2024-04-18 09:56:42.344271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:19.198 [2024-04-18 09:56:42.352137] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with 
pdu=0x2000195fef90 00:28:19.198 [2024-04-18 09:56:42.352501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:19.198 [2024-04-18 09:56:42.352540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:19.198 [2024-04-18 09:56:42.360441] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:28:19.198 [2024-04-18 09:56:42.360806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:19.198 [2024-04-18 09:56:42.360847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:19.199 [2024-04-18 09:56:42.368723] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:28:19.199 [2024-04-18 09:56:42.369090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:19.199 [2024-04-18 09:56:42.369141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:19.199 [2024-04-18 09:56:42.377064] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:28:19.199 [2024-04-18 09:56:42.377473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:19.199 [2024-04-18 09:56:42.377513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:19.199 [2024-04-18 09:56:42.385560] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:28:19.199 [2024-04-18 09:56:42.385952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:19.199 [2024-04-18 09:56:42.385993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:19.199 [2024-04-18 09:56:42.393908] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:28:19.199 [2024-04-18 09:56:42.394320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:19.199 [2024-04-18 09:56:42.394362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:19.199 [2024-04-18 09:56:42.402565] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:28:19.199 [2024-04-18 09:56:42.402987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:19.199 [2024-04-18 09:56:42.403035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:19.199 [2024-04-18 09:56:42.411186] 
tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:28:19.199 [2024-04-18 09:56:42.411606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:19.199 [2024-04-18 09:56:42.411660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:19.199 [2024-04-18 09:56:42.419762] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:28:19.199 [2024-04-18 09:56:42.420185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:19.199 [2024-04-18 09:56:42.420235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:19.199 [2024-04-18 09:56:42.428194] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:28:19.199 [2024-04-18 09:56:42.428583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:19.199 [2024-04-18 09:56:42.428625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:19.199 [2024-04-18 09:56:42.436663] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:28:19.199 [2024-04-18 09:56:42.437064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:19.199 [2024-04-18 09:56:42.437104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:19.199 [2024-04-18 09:56:42.445373] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:28:19.199 [2024-04-18 09:56:42.445797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:19.199 [2024-04-18 09:56:42.445837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:19.199 [2024-04-18 09:56:42.454278] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:28:19.199 [2024-04-18 09:56:42.454704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:19.199 [2024-04-18 09:56:42.454747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:19.199 [2024-04-18 09:56:42.463080] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:28:19.199 [2024-04-18 09:56:42.463520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:19.199 [2024-04-18 09:56:42.463563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:19.199 [2024-04-18 09:56:42.471691] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:28:19.199 [2024-04-18 09:56:42.472103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:19.199 [2024-04-18 09:56:42.472161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:19.199 [2024-04-18 09:56:42.480092] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:28:19.199 [2024-04-18 09:56:42.480497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:19.199 [2024-04-18 09:56:42.480540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:19.199 [2024-04-18 09:56:42.488587] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:28:19.199 [2024-04-18 09:56:42.489006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:19.199 [2024-04-18 09:56:42.489060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:19.199 [2024-04-18 09:56:42.497095] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:28:19.199 [2024-04-18 09:56:42.497504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:19.199 [2024-04-18 09:56:42.497549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:19.199 [2024-04-18 09:56:42.505714] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:28:19.199 [2024-04-18 09:56:42.506107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:19.199 [2024-04-18 09:56:42.506160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:19.199 [2024-04-18 09:56:42.514141] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:28:19.199 [2024-04-18 09:56:42.514552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:19.199 [2024-04-18 09:56:42.514594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:19.199 [2024-04-18 09:56:42.522567] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:28:19.199 [2024-04-18 09:56:42.522976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:19.199 [2024-04-18 09:56:42.523017] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:19.199 [2024-04-18 09:56:42.530955] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:28:19.199 [2024-04-18 09:56:42.531374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:19.199 [2024-04-18 09:56:42.531423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:19.199 [2024-04-18 09:56:42.539557] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:28:19.199 [2024-04-18 09:56:42.539962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:19.199 [2024-04-18 09:56:42.540021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:19.199 [2024-04-18 09:56:42.548118] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:28:19.199 [2024-04-18 09:56:42.548546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:19.199 [2024-04-18 09:56:42.548587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:19.199 [2024-04-18 09:56:42.556884] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:28:19.199 [2024-04-18 09:56:42.557326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:19.199 [2024-04-18 09:56:42.557364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:19.199 [2024-04-18 09:56:42.565788] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:28:19.200 [2024-04-18 09:56:42.566230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:19.200 [2024-04-18 09:56:42.566266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:19.200 [2024-04-18 09:56:42.574579] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:28:19.200 [2024-04-18 09:56:42.575040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:19.200 [2024-04-18 09:56:42.575079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:19.200 [2024-04-18 09:56:42.583083] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:28:19.200 [2024-04-18 09:56:42.583497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14112 len:32 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:28:19.200 [2024-04-18 09:56:42.583539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:19.462 [2024-04-18 09:56:42.591648] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:28:19.462 [2024-04-18 09:56:42.592106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:19.462 [2024-04-18 09:56:42.592178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:19.462 [2024-04-18 09:56:42.600257] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:28:19.462 [2024-04-18 09:56:42.600717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:19.462 [2024-04-18 09:56:42.600771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:19.462 [2024-04-18 09:56:42.608996] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:28:19.462 [2024-04-18 09:56:42.609382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:19.462 [2024-04-18 09:56:42.609425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:19.462 [2024-04-18 09:56:42.616970] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:28:19.462 [2024-04-18 09:56:42.617090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:19.462 [2024-04-18 09:56:42.617131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:19.462 [2024-04-18 09:56:42.625255] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:28:19.462 [2024-04-18 09:56:42.625373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:19.462 [2024-04-18 09:56:42.625409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:19.462 [2024-04-18 09:56:42.633666] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:28:19.462 [2024-04-18 09:56:42.633805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:19.462 [2024-04-18 09:56:42.633841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:19.462 [2024-04-18 09:56:42.642330] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:28:19.462 [2024-04-18 09:56:42.642473] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:19.462 [2024-04-18 09:56:42.642526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:19.462 [2024-04-18 09:56:42.651392] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:28:19.462 [2024-04-18 09:56:42.651594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:19.462 [2024-04-18 09:56:42.651661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:19.462 00:28:19.462 Latency(us) 00:28:19.462 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:19.462 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072) 00:28:19.462 nvme0n1 : 2.00 3614.56 451.82 0.00 0.00 4415.92 3321.48 12868.89 00:28:19.462 =================================================================================================================== 00:28:19.462 Total : 3614.56 451.82 0.00 0.00 4415.92 3321.48 12868.89 00:28:19.462 0 00:28:19.462 09:56:42 -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:28:19.462 09:56:42 -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:28:19.462 09:56:42 -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:28:19.462 | .driver_specific 00:28:19.462 | .nvme_error 00:28:19.462 | .status_code 00:28:19.462 | .command_transient_transport_error' 00:28:19.462 09:56:42 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:28:19.721 09:56:42 -- host/digest.sh@71 -- # (( 233 > 0 )) 00:28:19.721 09:56:42 -- host/digest.sh@73 -- # killprocess 80163 00:28:19.721 09:56:42 -- common/autotest_common.sh@936 -- # '[' -z 80163 ']' 00:28:19.721 09:56:42 -- common/autotest_common.sh@940 -- # kill -0 80163 00:28:19.721 09:56:42 -- common/autotest_common.sh@941 -- # uname 00:28:19.721 09:56:42 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:28:19.721 09:56:42 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 80163 00:28:19.721 killing process with pid 80163 00:28:19.721 Received shutdown signal, test time was about 2.000000 seconds 00:28:19.721 00:28:19.721 Latency(us) 00:28:19.721 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:19.721 =================================================================================================================== 00:28:19.721 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:28:19.721 09:56:42 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:28:19.721 09:56:42 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:28:19.721 09:56:42 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 80163' 00:28:19.721 09:56:42 -- common/autotest_common.sh@955 -- # kill 80163 00:28:19.721 09:56:42 -- common/autotest_common.sh@960 -- # wait 80163 00:28:21.094 09:56:44 -- host/digest.sh@116 -- # killprocess 79926 00:28:21.094 09:56:44 -- common/autotest_common.sh@936 -- # '[' -z 79926 ']' 00:28:21.094 09:56:44 -- common/autotest_common.sh@940 -- # kill -0 79926 00:28:21.094 09:56:44 -- common/autotest_common.sh@941 -- # uname 00:28:21.094 09:56:44 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:28:21.094 09:56:44 -- 
common/autotest_common.sh@942 -- # ps --no-headers -o comm= 79926 00:28:21.094 killing process with pid 79926 00:28:21.094 09:56:44 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:28:21.094 09:56:44 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:28:21.094 09:56:44 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 79926' 00:28:21.094 09:56:44 -- common/autotest_common.sh@955 -- # kill 79926 00:28:21.094 09:56:44 -- common/autotest_common.sh@960 -- # wait 79926 00:28:22.028 00:28:22.028 real 0m23.497s 00:28:22.028 user 0m44.350s 00:28:22.028 sys 0m5.051s 00:28:22.028 09:56:45 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:28:22.028 09:56:45 -- common/autotest_common.sh@10 -- # set +x 00:28:22.028 ************************************ 00:28:22.028 END TEST nvmf_digest_error 00:28:22.028 ************************************ 00:28:22.286 09:56:45 -- host/digest.sh@149 -- # trap - SIGINT SIGTERM EXIT 00:28:22.286 09:56:45 -- host/digest.sh@150 -- # nvmftestfini 00:28:22.286 09:56:45 -- nvmf/common.sh@477 -- # nvmfcleanup 00:28:22.286 09:56:45 -- nvmf/common.sh@117 -- # sync 00:28:22.286 09:56:45 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:28:22.286 09:56:45 -- nvmf/common.sh@120 -- # set +e 00:28:22.286 09:56:45 -- nvmf/common.sh@121 -- # for i in {1..20} 00:28:22.286 09:56:45 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:28:22.286 rmmod nvme_tcp 00:28:22.286 rmmod nvme_fabrics 00:28:22.286 rmmod nvme_keyring 00:28:22.286 09:56:45 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:28:22.286 09:56:45 -- nvmf/common.sh@124 -- # set -e 00:28:22.286 09:56:45 -- nvmf/common.sh@125 -- # return 0 00:28:22.286 09:56:45 -- nvmf/common.sh@478 -- # '[' -n 79926 ']' 00:28:22.286 09:56:45 -- nvmf/common.sh@479 -- # killprocess 79926 00:28:22.286 09:56:45 -- common/autotest_common.sh@936 -- # '[' -z 79926 ']' 00:28:22.286 09:56:45 -- common/autotest_common.sh@940 -- # kill -0 79926 00:28:22.286 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 940: kill: (79926) - No such process 00:28:22.286 Process with pid 79926 is not found 00:28:22.286 09:56:45 -- common/autotest_common.sh@963 -- # echo 'Process with pid 79926 is not found' 00:28:22.286 09:56:45 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:28:22.286 09:56:45 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:28:22.286 09:56:45 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:28:22.286 09:56:45 -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:28:22.286 09:56:45 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:28:22.286 09:56:45 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:22.286 09:56:45 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:28:22.286 09:56:45 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:22.286 09:56:45 -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:28:22.286 00:28:22.286 real 0m49.484s 00:28:22.286 user 1m32.548s 00:28:22.286 sys 0m10.484s 00:28:22.286 09:56:45 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:28:22.286 09:56:45 -- common/autotest_common.sh@10 -- # set +x 00:28:22.286 ************************************ 00:28:22.286 END TEST nvmf_digest 00:28:22.286 ************************************ 00:28:22.286 09:56:45 -- nvmf/nvmf.sh@108 -- # [[ 0 -eq 1 ]] 00:28:22.286 09:56:45 -- nvmf/nvmf.sh@113 -- # [[ 1 -eq 1 ]] 00:28:22.286 09:56:45 -- nvmf/nvmf.sh@114 -- # run_test nvmf_multipath 
/home/vagrant/spdk_repo/spdk/test/nvmf/host/multipath.sh --transport=tcp 00:28:22.286 09:56:45 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:28:22.286 09:56:45 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:28:22.286 09:56:45 -- common/autotest_common.sh@10 -- # set +x 00:28:22.545 ************************************ 00:28:22.545 START TEST nvmf_multipath 00:28:22.545 ************************************ 00:28:22.545 09:56:45 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/multipath.sh --transport=tcp 00:28:22.545 * Looking for test storage... 00:28:22.545 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:28:22.545 09:56:45 -- host/multipath.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:28:22.545 09:56:45 -- nvmf/common.sh@7 -- # uname -s 00:28:22.545 09:56:45 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:28:22.545 09:56:45 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:28:22.545 09:56:45 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:28:22.545 09:56:45 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:28:22.545 09:56:45 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:28:22.545 09:56:45 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:28:22.545 09:56:45 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:28:22.545 09:56:45 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:28:22.545 09:56:45 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:28:22.545 09:56:45 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:28:22.545 09:56:45 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:e0c1fd61-2902-4c83-8893-eeb512aeb67d 00:28:22.545 09:56:45 -- nvmf/common.sh@18 -- # NVME_HOSTID=e0c1fd61-2902-4c83-8893-eeb512aeb67d 00:28:22.545 09:56:45 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:28:22.545 09:56:45 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:28:22.545 09:56:45 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:28:22.545 09:56:45 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:28:22.545 09:56:45 -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:28:22.545 09:56:45 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:28:22.545 09:56:45 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:28:22.545 09:56:45 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:28:22.545 09:56:45 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:22.545 09:56:45 -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:22.545 09:56:45 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:22.545 09:56:45 -- paths/export.sh@5 -- # export PATH 00:28:22.545 09:56:45 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:22.545 09:56:45 -- nvmf/common.sh@47 -- # : 0 00:28:22.545 09:56:45 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:28:22.545 09:56:45 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:28:22.545 09:56:45 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:28:22.545 09:56:45 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:28:22.545 09:56:45 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:28:22.545 09:56:45 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:28:22.545 09:56:45 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:28:22.545 09:56:45 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:28:22.545 09:56:45 -- host/multipath.sh@11 -- # MALLOC_BDEV_SIZE=64 00:28:22.545 09:56:45 -- host/multipath.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:28:22.545 09:56:45 -- host/multipath.sh@14 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:28:22.545 09:56:45 -- host/multipath.sh@15 -- # bpf_sh=/home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 00:28:22.545 09:56:45 -- host/multipath.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:28:22.545 09:56:45 -- host/multipath.sh@20 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:28:22.545 09:56:45 -- host/multipath.sh@30 -- # nvmftestinit 00:28:22.545 09:56:45 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:28:22.545 09:56:45 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:28:22.545 09:56:45 -- nvmf/common.sh@437 -- # prepare_net_devs 00:28:22.545 09:56:45 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:28:22.545 09:56:45 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:28:22.545 09:56:45 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:22.545 09:56:45 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:28:22.545 09:56:45 -- common/autotest_common.sh@22 -- 
# _remove_spdk_ns 00:28:22.545 09:56:45 -- nvmf/common.sh@403 -- # [[ virt != virt ]] 00:28:22.545 09:56:45 -- nvmf/common.sh@405 -- # [[ no == yes ]] 00:28:22.545 09:56:45 -- nvmf/common.sh@412 -- # [[ virt == phy ]] 00:28:22.545 09:56:45 -- nvmf/common.sh@415 -- # [[ virt == phy-fallback ]] 00:28:22.545 09:56:45 -- nvmf/common.sh@420 -- # [[ tcp == tcp ]] 00:28:22.545 09:56:45 -- nvmf/common.sh@421 -- # nvmf_veth_init 00:28:22.545 09:56:45 -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:28:22.545 09:56:45 -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:28:22.545 09:56:45 -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:28:22.545 09:56:45 -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:28:22.545 09:56:45 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:28:22.545 09:56:45 -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:28:22.545 09:56:45 -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:28:22.545 09:56:45 -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:28:22.545 09:56:45 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:28:22.545 09:56:45 -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:28:22.545 09:56:45 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:28:22.545 09:56:45 -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:28:22.545 09:56:45 -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:28:22.545 09:56:45 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:28:22.545 Cannot find device "nvmf_tgt_br" 00:28:22.545 09:56:45 -- nvmf/common.sh@155 -- # true 00:28:22.545 09:56:45 -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:28:22.545 Cannot find device "nvmf_tgt_br2" 00:28:22.545 09:56:45 -- nvmf/common.sh@156 -- # true 00:28:22.545 09:56:45 -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:28:22.545 09:56:45 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:28:22.545 Cannot find device "nvmf_tgt_br" 00:28:22.545 09:56:45 -- nvmf/common.sh@158 -- # true 00:28:22.545 09:56:45 -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:28:22.545 Cannot find device "nvmf_tgt_br2" 00:28:22.545 09:56:45 -- nvmf/common.sh@159 -- # true 00:28:22.545 09:56:45 -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:28:22.545 09:56:45 -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:28:22.816 09:56:45 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:28:22.816 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:28:22.816 09:56:45 -- nvmf/common.sh@162 -- # true 00:28:22.817 09:56:45 -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:28:22.817 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:28:22.817 09:56:45 -- nvmf/common.sh@163 -- # true 00:28:22.817 09:56:45 -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:28:22.817 09:56:45 -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:28:22.817 09:56:45 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:28:22.817 09:56:45 -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:28:22.817 09:56:45 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:28:22.817 09:56:45 -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns 
nvmf_tgt_ns_spdk 00:28:22.817 09:56:46 -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:28:22.817 09:56:46 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:28:22.817 09:56:46 -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:28:22.817 09:56:46 -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:28:22.817 09:56:46 -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:28:22.817 09:56:46 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:28:22.817 09:56:46 -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:28:22.817 09:56:46 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:28:22.817 09:56:46 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:28:22.817 09:56:46 -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:28:22.817 09:56:46 -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:28:22.817 09:56:46 -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:28:22.817 09:56:46 -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:28:22.817 09:56:46 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:28:22.817 09:56:46 -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:28:22.817 09:56:46 -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:28:22.817 09:56:46 -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:28:22.817 09:56:46 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:28:22.817 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:28:22.817 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.074 ms 00:28:22.817 00:28:22.817 --- 10.0.0.2 ping statistics --- 00:28:22.817 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:22.817 rtt min/avg/max/mdev = 0.074/0.074/0.074/0.000 ms 00:28:22.817 09:56:46 -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:28:22.817 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:28:22.817 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.051 ms 00:28:22.817 00:28:22.817 --- 10.0.0.3 ping statistics --- 00:28:22.817 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:22.817 rtt min/avg/max/mdev = 0.051/0.051/0.051/0.000 ms 00:28:22.817 09:56:46 -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:28:22.817 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:28:22.817 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.024 ms 00:28:22.817 00:28:22.817 --- 10.0.0.1 ping statistics --- 00:28:22.817 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:22.817 rtt min/avg/max/mdev = 0.024/0.024/0.024/0.000 ms 00:28:22.817 09:56:46 -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:28:22.817 09:56:46 -- nvmf/common.sh@422 -- # return 0 00:28:22.817 09:56:46 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:28:22.817 09:56:46 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:28:22.817 09:56:46 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:28:22.817 09:56:46 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:28:22.817 09:56:46 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:28:22.817 09:56:46 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:28:22.817 09:56:46 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:28:22.817 09:56:46 -- host/multipath.sh@32 -- # nvmfappstart -m 0x3 00:28:22.817 09:56:46 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:28:22.817 09:56:46 -- common/autotest_common.sh@710 -- # xtrace_disable 00:28:22.817 09:56:46 -- common/autotest_common.sh@10 -- # set +x 00:28:22.817 09:56:46 -- nvmf/common.sh@470 -- # nvmfpid=80453 00:28:22.817 09:56:46 -- nvmf/common.sh@471 -- # waitforlisten 80453 00:28:22.817 09:56:46 -- common/autotest_common.sh@817 -- # '[' -z 80453 ']' 00:28:22.817 09:56:46 -- nvmf/common.sh@469 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:28:22.817 09:56:46 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:22.817 09:56:46 -- common/autotest_common.sh@822 -- # local max_retries=100 00:28:22.817 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:22.817 09:56:46 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:22.817 09:56:46 -- common/autotest_common.sh@826 -- # xtrace_disable 00:28:22.817 09:56:46 -- common/autotest_common.sh@10 -- # set +x 00:28:23.075 [2024-04-18 09:56:46.284639] Starting SPDK v24.05-pre git sha1 65b4e17c6 / DPDK 23.11.0 initialization... 00:28:23.075 [2024-04-18 09:56:46.284862] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:28:23.333 [2024-04-18 09:56:46.472516] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 2 00:28:23.592 [2024-04-18 09:56:46.773630] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:28:23.592 [2024-04-18 09:56:46.773722] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:28:23.592 [2024-04-18 09:56:46.773744] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:28:23.592 [2024-04-18 09:56:46.773772] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:28:23.592 [2024-04-18 09:56:46.773788] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
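
The nvmf_veth_init steps traced above give each side its own endpoint: the target interfaces live in the nvmf_tgt_ns_spdk namespace with 10.0.0.2 and 10.0.0.3, the initiator keeps 10.0.0.1, and one bridge joins the host-side veth ends. The following is a minimal sketch of that topology assembled only from the commands visible in the trace, assuming root plus iproute2/iptables; the helper's cleanup and error handling are left out.

  # target-side interfaces get their own network namespace
  ip netns add nvmf_tgt_ns_spdk
  # three veth pairs: initiator, first target port, second target port
  ip link add nvmf_init_if type veth peer name nvmf_init_br
  ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
  ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2
  ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
  ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
  # addresses: initiator on 10.0.0.1, namespaced target answers on 10.0.0.2 and 10.0.0.3
  ip addr add 10.0.0.1/24 dev nvmf_init_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2
  # bring everything up on both sides of the namespace boundary
  ip link set nvmf_init_if up
  ip link set nvmf_init_br up
  ip link set nvmf_tgt_br up
  ip link set nvmf_tgt_br2 up
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
  ip netns exec nvmf_tgt_ns_spdk ip link set lo up
  # one bridge ties the host-side veth ends together
  ip link add nvmf_br type bridge
  ip link set nvmf_br up
  ip link set nvmf_init_br master nvmf_br
  ip link set nvmf_tgt_br master nvmf_br
  ip link set nvmf_tgt_br2 master nvmf_br
  # admit NVMe/TCP traffic and bridge forwarding, then sanity-check reachability
  iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
  iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
  ping -c 1 10.0.0.2 && ping -c 1 10.0.0.3
  ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1
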
00:28:23.592 [2024-04-18 09:56:46.773973] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:28:23.593 [2024-04-18 09:56:46.773991] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:28:23.851 09:56:47 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:28:23.851 09:56:47 -- common/autotest_common.sh@850 -- # return 0 00:28:23.851 09:56:47 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:28:23.851 09:56:47 -- common/autotest_common.sh@716 -- # xtrace_disable 00:28:23.851 09:56:47 -- common/autotest_common.sh@10 -- # set +x 00:28:24.110 09:56:47 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:28:24.110 09:56:47 -- host/multipath.sh@33 -- # nvmfapp_pid=80453 00:28:24.110 09:56:47 -- host/multipath.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:28:24.368 [2024-04-18 09:56:47.511055] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:28:24.368 09:56:47 -- host/multipath.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:28:24.625 Malloc0 00:28:24.625 09:56:47 -- host/multipath.sh@38 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -r -m 2 00:28:24.882 09:56:48 -- host/multipath.sh@39 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:28:25.140 09:56:48 -- host/multipath.sh@40 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:28:25.398 [2024-04-18 09:56:48.620841] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:28:25.398 09:56:48 -- host/multipath.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:28:25.655 [2024-04-18 09:56:48.853026] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:28:25.655 09:56:48 -- host/multipath.sh@43 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 90 00:28:25.655 09:56:48 -- host/multipath.sh@44 -- # bdevperf_pid=80513 00:28:25.655 09:56:48 -- host/multipath.sh@46 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:28:25.655 09:56:48 -- host/multipath.sh@47 -- # waitforlisten 80513 /var/tmp/bdevperf.sock 00:28:25.655 09:56:48 -- common/autotest_common.sh@817 -- # '[' -z 80513 ']' 00:28:25.655 09:56:48 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:28:25.655 09:56:48 -- common/autotest_common.sh@822 -- # local max_retries=100 00:28:25.655 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:28:25.655 09:56:48 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
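The target-side provisioning and the bdevperf launch recorded above reduce to the rpc.py sequence below. Every argument is taken from the log; only the $rpc shorthand, the explicit backgrounding of bdevperf, and the socket-polling stand-in for waitforlisten are additions. The -r flag on nvmf_create_subsystem enables ANA reporting, which the ANA-state cycling later in the run depends on.

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  NQN=nqn.2016-06.io.spdk:cnode1

  # TCP transport, a RAM-backed bdev, and one subsystem exposing it on two ports:
  $rpc nvmf_create_transport -t tcp -o -u 8192
  $rpc bdev_malloc_create 64 512 -b Malloc0
  $rpc nvmf_create_subsystem "$NQN" -a -s SPDK00000000000001 -r -m 2
  $rpc nvmf_subsystem_add_ns "$NQN" Malloc0
  $rpc nvmf_subsystem_add_listener "$NQN" -t tcp -a 10.0.0.2 -s 4420
  $rpc nvmf_subsystem_add_listener "$NQN" -t tcp -a 10.0.0.2 -s 4421

  # The initiator side is a second SPDK app: bdevperf with its own RPC socket.
  /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z \
      -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 90 &
  until [ -S /var/tmp/bdevperf.sock ]; do sleep 0.1; done   # waitforlisten stand-in (assumed)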
00:28:25.655 09:56:48 -- common/autotest_common.sh@826 -- # xtrace_disable 00:28:25.655 09:56:48 -- common/autotest_common.sh@10 -- # set +x 00:28:26.648 09:56:49 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:28:26.648 09:56:49 -- common/autotest_common.sh@850 -- # return 0 00:28:26.648 09:56:49 -- host/multipath.sh@51 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 00:28:26.907 09:56:50 -- host/multipath.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -l -1 -o 10 00:28:27.165 Nvme0n1 00:28:27.423 09:56:50 -- host/multipath.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10 00:28:27.681 Nvme0n1 00:28:27.681 09:56:50 -- host/multipath.sh@76 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 120 -s /var/tmp/bdevperf.sock perform_tests 00:28:27.681 09:56:50 -- host/multipath.sh@78 -- # sleep 1 00:28:28.616 09:56:51 -- host/multipath.sh@81 -- # set_ANA_state non_optimized optimized 00:28:28.616 09:56:51 -- host/multipath.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:28:28.875 09:56:52 -- host/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:28:29.133 09:56:52 -- host/multipath.sh@83 -- # confirm_io_on_port optimized 4421 00:28:29.133 09:56:52 -- host/multipath.sh@65 -- # dtrace_pid=80564 00:28:29.133 09:56:52 -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 80453 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:28:29.133 09:56:52 -- host/multipath.sh@66 -- # sleep 6 00:28:35.736 09:56:58 -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="optimized") | .address.trsvcid' 00:28:35.736 09:56:58 -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:28:35.736 09:56:58 -- host/multipath.sh@67 -- # active_port=4421 00:28:35.736 09:56:58 -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:28:35.736 Attaching 4 probes... 
00:28:35.736 @path[10.0.0.2, 4421]: 12560 00:28:35.736 @path[10.0.0.2, 4421]: 12496 00:28:35.736 @path[10.0.0.2, 4421]: 13213 00:28:35.736 @path[10.0.0.2, 4421]: 13267 00:28:35.736 @path[10.0.0.2, 4421]: 13320 00:28:35.736 09:56:58 -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.2," {print $2}' 00:28:35.736 09:56:58 -- host/multipath.sh@69 -- # cut -d ']' -f1 00:28:35.736 09:56:58 -- host/multipath.sh@69 -- # sed -n 1p 00:28:35.736 09:56:58 -- host/multipath.sh@69 -- # port=4421 00:28:35.736 09:56:58 -- host/multipath.sh@70 -- # [[ 4421 == \4\4\2\1 ]] 00:28:35.736 09:56:58 -- host/multipath.sh@71 -- # [[ 4421 == \4\4\2\1 ]] 00:28:35.736 09:56:58 -- host/multipath.sh@72 -- # kill 80564 00:28:35.736 09:56:58 -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:28:35.736 09:56:58 -- host/multipath.sh@86 -- # set_ANA_state non_optimized inaccessible 00:28:35.736 09:56:58 -- host/multipath.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:28:35.736 09:56:58 -- host/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:28:35.994 09:56:59 -- host/multipath.sh@87 -- # confirm_io_on_port non_optimized 4420 00:28:35.994 09:56:59 -- host/multipath.sh@65 -- # dtrace_pid=80673 00:28:35.994 09:56:59 -- host/multipath.sh@66 -- # sleep 6 00:28:35.994 09:56:59 -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 80453 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:28:42.552 09:57:05 -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:28:42.552 09:57:05 -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="non_optimized") | .address.trsvcid' 00:28:42.552 09:57:05 -- host/multipath.sh@67 -- # active_port=4420 00:28:42.552 09:57:05 -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:28:42.552 Attaching 4 probes... 
00:28:42.552 @path[10.0.0.2, 4420]: 12925 00:28:42.552 @path[10.0.0.2, 4420]: 13195 00:28:42.552 @path[10.0.0.2, 4420]: 13261 00:28:42.552 @path[10.0.0.2, 4420]: 13297 00:28:42.552 @path[10.0.0.2, 4420]: 13131 00:28:42.552 09:57:05 -- host/multipath.sh@69 -- # cut -d ']' -f1 00:28:42.552 09:57:05 -- host/multipath.sh@69 -- # sed -n 1p 00:28:42.552 09:57:05 -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.2," {print $2}' 00:28:42.552 09:57:05 -- host/multipath.sh@69 -- # port=4420 00:28:42.552 09:57:05 -- host/multipath.sh@70 -- # [[ 4420 == \4\4\2\0 ]] 00:28:42.552 09:57:05 -- host/multipath.sh@71 -- # [[ 4420 == \4\4\2\0 ]] 00:28:42.552 09:57:05 -- host/multipath.sh@72 -- # kill 80673 00:28:42.552 09:57:05 -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:28:42.552 09:57:05 -- host/multipath.sh@89 -- # set_ANA_state inaccessible optimized 00:28:42.552 09:57:05 -- host/multipath.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:28:42.552 09:57:05 -- host/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:28:42.811 09:57:06 -- host/multipath.sh@90 -- # confirm_io_on_port optimized 4421 00:28:42.811 09:57:06 -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 80453 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:28:42.811 09:57:06 -- host/multipath.sh@65 -- # dtrace_pid=80786 00:28:42.811 09:57:06 -- host/multipath.sh@66 -- # sleep 6 00:28:49.381 09:57:12 -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:28:49.381 09:57:12 -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="optimized") | .address.trsvcid' 00:28:49.381 09:57:12 -- host/multipath.sh@67 -- # active_port=4421 00:28:49.381 09:57:12 -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:28:49.381 Attaching 4 probes... 
00:28:49.381 @path[10.0.0.2, 4421]: 10302 00:28:49.381 @path[10.0.0.2, 4421]: 13288 00:28:49.381 @path[10.0.0.2, 4421]: 13792 00:28:49.381 @path[10.0.0.2, 4421]: 12995 00:28:49.381 @path[10.0.0.2, 4421]: 13250 00:28:49.381 09:57:12 -- host/multipath.sh@69 -- # cut -d ']' -f1 00:28:49.381 09:57:12 -- host/multipath.sh@69 -- # sed -n 1p 00:28:49.381 09:57:12 -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.2," {print $2}' 00:28:49.381 09:57:12 -- host/multipath.sh@69 -- # port=4421 00:28:49.381 09:57:12 -- host/multipath.sh@70 -- # [[ 4421 == \4\4\2\1 ]] 00:28:49.381 09:57:12 -- host/multipath.sh@71 -- # [[ 4421 == \4\4\2\1 ]] 00:28:49.381 09:57:12 -- host/multipath.sh@72 -- # kill 80786 00:28:49.381 09:57:12 -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:28:49.381 09:57:12 -- host/multipath.sh@93 -- # set_ANA_state inaccessible inaccessible 00:28:49.381 09:57:12 -- host/multipath.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:28:49.381 09:57:12 -- host/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:28:49.640 09:57:12 -- host/multipath.sh@94 -- # confirm_io_on_port '' '' 00:28:49.640 09:57:12 -- host/multipath.sh@65 -- # dtrace_pid=80904 00:28:49.640 09:57:12 -- host/multipath.sh@66 -- # sleep 6 00:28:49.640 09:57:12 -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 80453 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:28:56.205 09:57:18 -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="") | .address.trsvcid' 00:28:56.205 09:57:18 -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:28:56.205 09:57:19 -- host/multipath.sh@67 -- # active_port= 00:28:56.205 09:57:19 -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:28:56.205 Attaching 4 probes... 
00:28:56.205 00:28:56.205 00:28:56.205 00:28:56.205 00:28:56.205 00:28:56.205 09:57:19 -- host/multipath.sh@69 -- # cut -d ']' -f1 00:28:56.205 09:57:19 -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.2," {print $2}' 00:28:56.205 09:57:19 -- host/multipath.sh@69 -- # sed -n 1p 00:28:56.205 09:57:19 -- host/multipath.sh@69 -- # port= 00:28:56.205 09:57:19 -- host/multipath.sh@70 -- # [[ '' == '' ]] 00:28:56.205 09:57:19 -- host/multipath.sh@71 -- # [[ '' == '' ]] 00:28:56.205 09:57:19 -- host/multipath.sh@72 -- # kill 80904 00:28:56.205 09:57:19 -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:28:56.205 09:57:19 -- host/multipath.sh@96 -- # set_ANA_state non_optimized optimized 00:28:56.205 09:57:19 -- host/multipath.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:28:56.205 09:57:19 -- host/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:28:56.463 09:57:19 -- host/multipath.sh@97 -- # confirm_io_on_port optimized 4421 00:28:56.463 09:57:19 -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 80453 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:28:56.463 09:57:19 -- host/multipath.sh@65 -- # dtrace_pid=81011 00:28:56.463 09:57:19 -- host/multipath.sh@66 -- # sleep 6 00:29:03.026 09:57:25 -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:29:03.026 09:57:25 -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="optimized") | .address.trsvcid' 00:29:03.026 09:57:25 -- host/multipath.sh@67 -- # active_port=4421 00:29:03.026 09:57:25 -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:29:03.026 Attaching 4 probes... 
00:29:03.026 @path[10.0.0.2, 4421]: 13096 00:29:03.026 @path[10.0.0.2, 4421]: 13386 00:29:03.026 @path[10.0.0.2, 4421]: 13029 00:29:03.026 @path[10.0.0.2, 4421]: 12822 00:29:03.026 @path[10.0.0.2, 4421]: 13280 00:29:03.026 09:57:25 -- host/multipath.sh@69 -- # cut -d ']' -f1 00:29:03.026 09:57:25 -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.2," {print $2}' 00:29:03.026 09:57:25 -- host/multipath.sh@69 -- # sed -n 1p 00:29:03.026 09:57:25 -- host/multipath.sh@69 -- # port=4421 00:29:03.026 09:57:25 -- host/multipath.sh@70 -- # [[ 4421 == \4\4\2\1 ]] 00:29:03.026 09:57:25 -- host/multipath.sh@71 -- # [[ 4421 == \4\4\2\1 ]] 00:29:03.026 09:57:25 -- host/multipath.sh@72 -- # kill 81011 00:29:03.026 09:57:25 -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:29:03.026 09:57:25 -- host/multipath.sh@100 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:29:03.026 [2024-04-18 09:57:26.166886] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002c80 is same with the state(5) to be set 00:29:03.026 [2024-04-18 09:57:26.166967] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002c80 is same with the state(5) to be set 00:29:03.026 [2024-04-18 09:57:26.166993] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002c80 is same with the state(5) to be set 00:29:03.026 [2024-04-18 09:57:26.167007] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002c80 is same with the state(5) to be set 00:29:03.026 [2024-04-18 09:57:26.167023] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002c80 is same with the state(5) to be set 00:29:03.026 [2024-04-18 09:57:26.167035] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002c80 is same with the state(5) to be set 00:29:03.026 [2024-04-18 09:57:26.167050] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002c80 is same with the state(5) to be set 00:29:03.026 [2024-04-18 09:57:26.167063] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002c80 is same with the state(5) to be set 00:29:03.026 [2024-04-18 09:57:26.167078] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002c80 is same with the state(5) to be set 00:29:03.026 [2024-04-18 09:57:26.167091] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002c80 is same with the state(5) to be set 00:29:03.026 [2024-04-18 09:57:26.167110] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002c80 is same with the state(5) to be set 00:29:03.026 [2024-04-18 09:57:26.167145] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002c80 is same with the state(5) to be set 00:29:03.026 [2024-04-18 09:57:26.167165] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002c80 is same with the state(5) to be set 00:29:03.026 [2024-04-18 09:57:26.167178] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002c80 is same with the state(5) to be set 00:29:03.026 [2024-04-18 09:57:26.167194] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002c80 is same with the state(5) to be set 00:29:03.026 09:57:26 -- 
host/multipath.sh@101 -- # sleep 1 00:29:03.975 09:57:27 -- host/multipath.sh@104 -- # confirm_io_on_port non_optimized 4420 00:29:03.975 09:57:27 -- host/multipath.sh@65 -- # dtrace_pid=81135 00:29:03.975 09:57:27 -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 80453 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:29:03.975 09:57:27 -- host/multipath.sh@66 -- # sleep 6 00:29:10.541 09:57:33 -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:29:10.541 09:57:33 -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="non_optimized") | .address.trsvcid' 00:29:10.541 09:57:33 -- host/multipath.sh@67 -- # active_port=4420 00:29:10.541 09:57:33 -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:29:10.541 Attaching 4 probes... 00:29:10.541 @path[10.0.0.2, 4420]: 12081 00:29:10.541 @path[10.0.0.2, 4420]: 12400 00:29:10.541 @path[10.0.0.2, 4420]: 12488 00:29:10.541 @path[10.0.0.2, 4420]: 12285 00:29:10.541 @path[10.0.0.2, 4420]: 12344 00:29:10.541 09:57:33 -- host/multipath.sh@69 -- # cut -d ']' -f1 00:29:10.541 09:57:33 -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.2," {print $2}' 00:29:10.541 09:57:33 -- host/multipath.sh@69 -- # sed -n 1p 00:29:10.541 09:57:33 -- host/multipath.sh@69 -- # port=4420 00:29:10.541 09:57:33 -- host/multipath.sh@70 -- # [[ 4420 == \4\4\2\0 ]] 00:29:10.541 09:57:33 -- host/multipath.sh@71 -- # [[ 4420 == \4\4\2\0 ]] 00:29:10.542 09:57:33 -- host/multipath.sh@72 -- # kill 81135 00:29:10.542 09:57:33 -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:29:10.542 09:57:33 -- host/multipath.sh@107 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:29:10.542 [2024-04-18 09:57:33.704894] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:29:10.542 09:57:33 -- host/multipath.sh@108 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:29:10.801 09:57:33 -- host/multipath.sh@111 -- # sleep 6 00:29:17.363 09:57:39 -- host/multipath.sh@112 -- # confirm_io_on_port optimized 4421 00:29:17.363 09:57:39 -- host/multipath.sh@65 -- # dtrace_pid=81304 00:29:17.363 09:57:39 -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 80453 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:29:17.363 09:57:39 -- host/multipath.sh@66 -- # sleep 6 00:29:22.633 09:57:45 -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:29:22.633 09:57:45 -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="optimized") | .address.trsvcid' 00:29:22.891 09:57:46 -- host/multipath.sh@67 -- # active_port=4421 00:29:22.891 09:57:46 -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:29:22.891 Attaching 4 probes... 
00:29:22.891 @path[10.0.0.2, 4421]: 12095 00:29:22.891 @path[10.0.0.2, 4421]: 12070 00:29:22.891 @path[10.0.0.2, 4421]: 12118 00:29:22.891 @path[10.0.0.2, 4421]: 12385 00:29:22.891 @path[10.0.0.2, 4421]: 12613 00:29:22.891 09:57:46 -- host/multipath.sh@69 -- # cut -d ']' -f1 00:29:22.891 09:57:46 -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.2," {print $2}' 00:29:22.891 09:57:46 -- host/multipath.sh@69 -- # sed -n 1p 00:29:22.891 09:57:46 -- host/multipath.sh@69 -- # port=4421 00:29:22.891 09:57:46 -- host/multipath.sh@70 -- # [[ 4421 == \4\4\2\1 ]] 00:29:22.891 09:57:46 -- host/multipath.sh@71 -- # [[ 4421 == \4\4\2\1 ]] 00:29:22.891 09:57:46 -- host/multipath.sh@72 -- # kill 81304 00:29:22.891 09:57:46 -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:29:22.891 09:57:46 -- host/multipath.sh@114 -- # killprocess 80513 00:29:22.891 09:57:46 -- common/autotest_common.sh@936 -- # '[' -z 80513 ']' 00:29:22.891 09:57:46 -- common/autotest_common.sh@940 -- # kill -0 80513 00:29:22.891 09:57:46 -- common/autotest_common.sh@941 -- # uname 00:29:22.891 09:57:46 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:29:22.891 09:57:46 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 80513 00:29:23.151 killing process with pid 80513 00:29:23.151 09:57:46 -- common/autotest_common.sh@942 -- # process_name=reactor_2 00:29:23.151 09:57:46 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']' 00:29:23.151 09:57:46 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 80513' 00:29:23.151 09:57:46 -- common/autotest_common.sh@955 -- # kill 80513 00:29:23.151 09:57:46 -- common/autotest_common.sh@960 -- # wait 80513 00:29:23.724 Connection closed with partial response: 00:29:23.724 00:29:23.724 00:29:24.301 09:57:47 -- host/multipath.sh@116 -- # wait 80513 00:29:24.301 09:57:47 -- host/multipath.sh@118 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:29:24.301 [2024-04-18 09:56:48.965846] Starting SPDK v24.05-pre git sha1 65b4e17c6 / DPDK 23.11.0 initialization... 00:29:24.301 [2024-04-18 09:56:48.966060] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80513 ] 00:29:24.301 [2024-04-18 09:56:49.135774] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:24.301 [2024-04-18 09:56:49.415373] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:29:24.301 Running I/O for 90 seconds... 
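Before the per-I/O trace that follows, the host-side multipath flow exercised above is worth condensing: attach the same subsystem through both target ports (the second attach with -x multipath so both connections back one Nvme0n1 bdev), flip the per-listener ANA states, and use the bpftrace path counters to confirm that I/O actually moved to the expected port. The sketch below is a hedged paraphrase assembled from the commands in this log; the helper-function bodies, the redirection of bpftrace.sh output into trace.txt, and the simplified port check are reconstructions rather than the exact text of host/multipath.sh.

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  rpc_bperf="$rpc -s /var/tmp/bdevperf.sock"
  NQN=nqn.2016-06.io.spdk:cnode1
  trace=/home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt

  # Two controllers, one bdev: the second attach adds a path to Nvme0n1.
  $rpc_bperf bdev_nvme_set_options -r -1        # options exactly as logged
  $rpc_bperf bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 \
      -f ipv4 -n "$NQN" -l -1 -o 10
  $rpc_bperf bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4421 \
      -f ipv4 -n "$NQN" -x multipath -l -1 -o 10

  set_ANA_state() {   # $1 = state for port 4420, $2 = state for port 4421
      $rpc nvmf_subsystem_listener_set_ana_state "$NQN" -t tcp -a 10.0.0.2 -s 4420 -n "$1"
      $rpc nvmf_subsystem_listener_set_ana_state "$NQN" -t tcp -a 10.0.0.2 -s 4421 -n "$2"
  }

  confirm_io_on_port() {   # $1 = expected ANA state, $2 = expected port
      # bpftrace.sh attaches the path counters to the running target; piping
      # its output into trace.txt is an assumption about how the harness
      # collects lines such as:  @path[10.0.0.2, 4421]: 12560
      /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh "$tgt_pid" \
          /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt > "$trace" &
      local bpf_pid=$!
      sleep 6
      local want port
      want=$($rpc nvmf_subsystem_get_listeners "$NQN" \
          | jq -r ".[] | select (.ana_states[0].ana_state==\"$1\") | .address.trsvcid")
      port=$(awk '$1=="@path[10.0.0.2," {print $2}' "$trace" | cut -d ']' -f1 | sed -n 1p)
      kill "$bpf_pid"
      [[ "$port" == "$2" && "$want" == "$2" ]] || return 1
      rm -f "$trace"
  }

  tgt_pid=80453   # nvmfapp_pid in this run
  set_ANA_state non_optimized optimized    && confirm_io_on_port optimized 4421
  set_ANA_state non_optimized inaccessible && confirm_io_on_port non_optimized 4420
  set_ANA_state inaccessible optimized     && confirm_io_on_port optimized 4421
  set_ANA_state inaccessible inaccessible  && confirm_io_on_port '' ''
  # ...the remove-listener / re-add-listener cases logged above follow the same pattern.

The try.txt excerpt that follows shows the target-side view of those transitions: batches of commands on qid:1 completing with ASYMMETRIC ACCESS INACCESSIBLE (03/02) while a path is being switched away from, until I/O resumes on the surviving port.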
00:29:24.301 [2024-04-18 09:56:59.190292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:36984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:24.301 [2024-04-18 09:56:59.190410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:29:24.301 [2024-04-18 09:56:59.190505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:36992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:24.301 [2024-04-18 09:56:59.190534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:29:24.301 [2024-04-18 09:56:59.190569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:37000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:24.301 [2024-04-18 09:56:59.190590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:29:24.301 [2024-04-18 09:56:59.190622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:37008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:24.301 [2024-04-18 09:56:59.190643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:24.301 [2024-04-18 09:56:59.190675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:37016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:24.301 [2024-04-18 09:56:59.190695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:24.301 [2024-04-18 09:56:59.190726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:37024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:24.301 [2024-04-18 09:56:59.190746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:29:24.301 [2024-04-18 09:56:59.190777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:37032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:24.301 [2024-04-18 09:56:59.190797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:29:24.301 [2024-04-18 09:56:59.190828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:37040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:24.301 [2024-04-18 09:56:59.190849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:29:24.301 [2024-04-18 09:56:59.190881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:37048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:24.301 [2024-04-18 09:56:59.190901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:29:24.301 [2024-04-18 09:56:59.190948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:37056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:24.301 [2024-04-18 09:56:59.190969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:114 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:29:24.301 [2024-04-18 09:56:59.191000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:37064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:24.301 [2024-04-18 09:56:59.191037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:29:24.301 [2024-04-18 09:56:59.191072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:37072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:24.301 [2024-04-18 09:56:59.191093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:29:24.301 [2024-04-18 09:56:59.191138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:37080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:24.301 [2024-04-18 09:56:59.191162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:29:24.301 [2024-04-18 09:56:59.191194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:37088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:24.301 [2024-04-18 09:56:59.191216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:29:24.301 [2024-04-18 09:56:59.191248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:37096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:24.301 [2024-04-18 09:56:59.191268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:29:24.301 [2024-04-18 09:56:59.191299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:37104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:24.301 [2024-04-18 09:56:59.191320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:29:24.301 [2024-04-18 09:56:59.191359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:37112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:24.301 [2024-04-18 09:56:59.191381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:29:24.301 [2024-04-18 09:56:59.191413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:37120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:24.301 [2024-04-18 09:56:59.191433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:29:24.301 [2024-04-18 09:56:59.191464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:37128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:24.301 [2024-04-18 09:56:59.191485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:29:24.301 [2024-04-18 09:56:59.191516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:37136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:24.301 [2024-04-18 09:56:59.191537] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:29:24.301 [2024-04-18 09:56:59.191568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:36600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:24.301 [2024-04-18 09:56:59.191588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:29:24.301 [2024-04-18 09:56:59.191620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:36608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:24.301 [2024-04-18 09:56:59.191640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:29:24.301 [2024-04-18 09:56:59.191671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:36616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:24.301 [2024-04-18 09:56:59.191691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:29:24.301 [2024-04-18 09:56:59.191734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:36624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:24.301 [2024-04-18 09:56:59.191756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:29:24.301 [2024-04-18 09:56:59.191788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:36632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:24.301 [2024-04-18 09:56:59.191809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:29:24.301 [2024-04-18 09:56:59.191840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:36640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:24.301 [2024-04-18 09:56:59.191861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:29:24.301 [2024-04-18 09:56:59.191891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:36648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:24.301 [2024-04-18 09:56:59.191912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:29:24.301 [2024-04-18 09:56:59.191943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:36656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:24.301 [2024-04-18 09:56:59.191964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:29:24.301 [2024-04-18 09:56:59.191994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:36664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:24.301 [2024-04-18 09:56:59.192015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:29:24.301 [2024-04-18 09:56:59.192046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:36672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:29:24.301 [2024-04-18 09:56:59.192067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:29:24.301 [2024-04-18 09:56:59.192098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:36680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:24.301 [2024-04-18 09:56:59.192119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:29:24.301 [2024-04-18 09:56:59.192166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:36688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:24.301 [2024-04-18 09:56:59.192187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:29:24.301 [2024-04-18 09:56:59.192219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:36696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:24.301 [2024-04-18 09:56:59.192239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:29:24.301 [2024-04-18 09:56:59.192271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:36704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:24.301 [2024-04-18 09:56:59.192292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:29:24.301 [2024-04-18 09:56:59.192323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:36712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:24.301 [2024-04-18 09:56:59.192344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:29:24.301 [2024-04-18 09:56:59.192384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:36720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:24.301 [2024-04-18 09:56:59.192406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:24.301 [2024-04-18 09:56:59.192438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:37144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:24.301 [2024-04-18 09:56:59.192458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:24.301 [2024-04-18 09:56:59.192489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:37152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:24.301 [2024-04-18 09:56:59.192510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:29:24.301 [2024-04-18 09:56:59.192541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:37160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:24.301 [2024-04-18 09:56:59.192561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:29:24.301 [2024-04-18 09:56:59.192593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 
nsid:1 lba:37168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:24.301 [2024-04-18 09:56:59.192614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:29:24.301 [2024-04-18 09:56:59.192667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:37176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:24.301 [2024-04-18 09:56:59.192693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:29:24.301 [2024-04-18 09:56:59.192725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:37184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:24.301 [2024-04-18 09:56:59.192746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:29:24.301 [2024-04-18 09:56:59.192777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:37192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:24.301 [2024-04-18 09:56:59.192798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:29:24.301 [2024-04-18 09:56:59.192850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:37200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:24.301 [2024-04-18 09:56:59.192872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:29:24.301 [2024-04-18 09:56:59.192903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:37208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:24.301 [2024-04-18 09:56:59.192923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:29:24.301 [2024-04-18 09:56:59.192954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:37216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:24.301 [2024-04-18 09:56:59.192975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:29:24.301 [2024-04-18 09:56:59.193008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:37224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:24.301 [2024-04-18 09:56:59.193028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:29:24.301 [2024-04-18 09:56:59.193060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:37232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:24.301 [2024-04-18 09:56:59.193092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:29:24.301 [2024-04-18 09:56:59.193141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:36728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:24.301 [2024-04-18 09:56:59.193167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:29:24.301 [2024-04-18 09:56:59.193200] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:36736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:24.301 [2024-04-18 09:56:59.193221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:29:24.301 [2024-04-18 09:56:59.193253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:36744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:24.301 [2024-04-18 09:56:59.193274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:29:24.301 [2024-04-18 09:56:59.193306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:36752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:24.301 [2024-04-18 09:56:59.193327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:29:24.301 [2024-04-18 09:56:59.193357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:36760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:24.302 [2024-04-18 09:56:59.193378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:29:24.302 [2024-04-18 09:56:59.193409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:36768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:24.302 [2024-04-18 09:56:59.193430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:29:24.302 [2024-04-18 09:56:59.193460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:36776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:24.302 [2024-04-18 09:56:59.193481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:29:24.302 [2024-04-18 09:56:59.193511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:36784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:24.302 [2024-04-18 09:56:59.193532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:29:24.302 [2024-04-18 09:56:59.193563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:37240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:24.302 [2024-04-18 09:56:59.193583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:29:24.302 [2024-04-18 09:56:59.193614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:37248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:24.302 [2024-04-18 09:56:59.193634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:29:24.302 [2024-04-18 09:56:59.193665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:37256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:24.302 [2024-04-18 09:56:59.193685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0078 p:0 m:0 
dnr:0 00:29:24.302 [2024-04-18 09:56:59.193716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:37264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:24.302 [2024-04-18 09:56:59.193745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:29:24.302 [2024-04-18 09:56:59.193778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:37272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:24.302 [2024-04-18 09:56:59.193799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:29:24.302 [2024-04-18 09:56:59.193830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:37280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:24.302 [2024-04-18 09:56:59.193850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:29:24.302 [2024-04-18 09:56:59.193881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:37288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:24.302 [2024-04-18 09:56:59.193901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:29:24.302 [2024-04-18 09:56:59.193932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:37296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:24.302 [2024-04-18 09:56:59.193953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:29:24.302 [2024-04-18 09:56:59.193984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:37304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:24.302 [2024-04-18 09:56:59.194006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:29:24.302 [2024-04-18 09:56:59.194037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:37312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:24.302 [2024-04-18 09:56:59.194058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:29:24.302 [2024-04-18 09:56:59.194089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:37320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:24.302 [2024-04-18 09:56:59.194110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:24.302 [2024-04-18 09:56:59.194157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:37328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:24.302 [2024-04-18 09:56:59.194181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:24.302 [2024-04-18 09:56:59.194211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:37336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:24.302 [2024-04-18 09:56:59.194231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:118 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:24.302 [2024-04-18 09:56:59.194262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:37344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:24.302 [2024-04-18 09:56:59.194282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:29:24.302 [2024-04-18 09:56:59.194312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:37352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:24.302 [2024-04-18 09:56:59.194332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:29:24.302 [2024-04-18 09:56:59.194363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:37360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:24.302 [2024-04-18 09:56:59.194384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:29:24.302 [2024-04-18 09:56:59.194430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:37368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:24.302 [2024-04-18 09:56:59.194453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:29:24.302 [2024-04-18 09:56:59.194484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:37376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:24.302 [2024-04-18 09:56:59.194504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:29:24.302 [2024-04-18 09:56:59.194535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:37384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:24.302 [2024-04-18 09:56:59.194556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:29:24.302 [2024-04-18 09:56:59.194586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:37392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:24.302 [2024-04-18 09:56:59.194607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:29:24.302 [2024-04-18 09:56:59.194637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:37400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:24.302 [2024-04-18 09:56:59.194658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:29:24.302 [2024-04-18 09:56:59.194688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:37408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:24.302 [2024-04-18 09:56:59.194709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:29:24.302 [2024-04-18 09:56:59.194757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:36792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:24.302 [2024-04-18 09:56:59.194781] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:29:24.302 [2024-04-18 09:56:59.194813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:36800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:24.302 [2024-04-18 09:56:59.194833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:29:24.302 [2024-04-18 09:56:59.194866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:36808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:24.302 [2024-04-18 09:56:59.194901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:29:24.302 [2024-04-18 09:56:59.194945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:36816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:24.302 [2024-04-18 09:56:59.194967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:29:24.302 [2024-04-18 09:56:59.194998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:36824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:24.302 [2024-04-18 09:56:59.195019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:29:24.302 [2024-04-18 09:56:59.195056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:36832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:24.302 [2024-04-18 09:56:59.195082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:29:24.302 [2024-04-18 09:56:59.195138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:36840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:24.302 [2024-04-18 09:56:59.195164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:29:24.302 [2024-04-18 09:56:59.195197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:36848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:24.302 [2024-04-18 09:56:59.195218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:29:24.302 [2024-04-18 09:56:59.195249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:37416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:24.302 [2024-04-18 09:56:59.195269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:29:24.302 [2024-04-18 09:56:59.195301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:37424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:24.302 [2024-04-18 09:56:59.195322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:29:24.302 [2024-04-18 09:56:59.195376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:37432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:29:24.302 [2024-04-18 09:56:59.195401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:29:24.302 [2024-04-18 09:56:59.195433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:37440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:24.302 [2024-04-18 09:56:59.195454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:29:24.302 [2024-04-18 09:56:59.195485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:37448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:24.302 [2024-04-18 09:56:59.195506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:29:24.302 [2024-04-18 09:56:59.195537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:37456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:24.302 [2024-04-18 09:56:59.195558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:29:24.302 [2024-04-18 09:56:59.195589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:37464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:24.302 [2024-04-18 09:56:59.195609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:29:24.302 [2024-04-18 09:56:59.195640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:37472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:24.302 [2024-04-18 09:56:59.195661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:29:24.302 [2024-04-18 09:56:59.195692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:37480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:24.302 [2024-04-18 09:56:59.195712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:29:24.302 [2024-04-18 09:56:59.195744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:37488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:24.302 [2024-04-18 09:56:59.195765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:29:24.302 [2024-04-18 09:56:59.195796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:37496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:24.302 [2024-04-18 09:56:59.195827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:29:24.302 [2024-04-18 09:56:59.195861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:37504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:24.302 [2024-04-18 09:56:59.195882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:29:24.302 [2024-04-18 09:56:59.195914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:37512 
len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:24.302 [2024-04-18 09:56:59.195935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:29:24.302 [2024-04-18 09:56:59.195966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:37520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:24.302 [2024-04-18 09:56:59.195987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:24.302 [2024-04-18 09:56:59.196017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:37528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:24.302 [2024-04-18 09:56:59.196039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:24.302 [2024-04-18 09:56:59.196070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:37536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:24.302 [2024-04-18 09:56:59.196090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:29:24.302 [2024-04-18 09:56:59.196121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:37544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:24.302 [2024-04-18 09:56:59.196158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:29:24.302 [2024-04-18 09:56:59.196191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:36856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:24.302 [2024-04-18 09:56:59.196212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:29:24.302 [2024-04-18 09:56:59.196243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:36864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:24.302 [2024-04-18 09:56:59.196264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:29:24.302 [2024-04-18 09:56:59.196294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:36872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:24.302 [2024-04-18 09:56:59.196315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:29:24.302 [2024-04-18 09:56:59.196346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:36880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:24.302 [2024-04-18 09:56:59.196366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:29:24.302 [2024-04-18 09:56:59.196416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:36888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:24.302 [2024-04-18 09:56:59.196437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:29:24.302 [2024-04-18 09:56:59.196468] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:36896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:24.302 [2024-04-18 09:56:59.196498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:29:24.302 [2024-04-18 09:56:59.196530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:36904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:24.302 [2024-04-18 09:56:59.196551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:29:24.302 [2024-04-18 09:56:59.196591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:36912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:24.302 [2024-04-18 09:56:59.196612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:29:24.302 [2024-04-18 09:56:59.196644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:36920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:24.302 [2024-04-18 09:56:59.196665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:29:24.302 [2024-04-18 09:56:59.196696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:36928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:24.302 [2024-04-18 09:56:59.196717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:29:24.302 [2024-04-18 09:56:59.196747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:36936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:24.302 [2024-04-18 09:56:59.196768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:29:24.302 [2024-04-18 09:56:59.196799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:36944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:24.302 [2024-04-18 09:56:59.196820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:29:24.302 [2024-04-18 09:56:59.196850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:36952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:24.302 [2024-04-18 09:56:59.196870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:29:24.302 [2024-04-18 09:56:59.196901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:36960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:24.302 [2024-04-18 09:56:59.196922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:29:24.302 [2024-04-18 09:56:59.196953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:36968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:24.302 [2024-04-18 09:56:59.196973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0033 p:0 m:0 
dnr:0 00:29:24.302 [2024-04-18 09:56:59.198936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:36976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:24.302 [2024-04-18 09:56:59.198978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:29:24.302 [2024-04-18 09:56:59.199025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:37552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:24.302 [2024-04-18 09:56:59.199048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:29:24.302 [2024-04-18 09:56:59.199081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:37560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:24.302 [2024-04-18 09:56:59.199102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:29:24.302 [2024-04-18 09:56:59.199166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:37568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:24.302 [2024-04-18 09:56:59.199190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:29:24.302 [2024-04-18 09:56:59.199222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:37576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:24.302 [2024-04-18 09:56:59.199242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:29:24.302 [2024-04-18 09:56:59.199274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:37584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:24.302 [2024-04-18 09:56:59.199294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:29:24.302 [2024-04-18 09:56:59.199325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:37592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:24.302 [2024-04-18 09:56:59.199346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:29:24.302 [2024-04-18 09:56:59.199378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:37600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:24.302 [2024-04-18 09:56:59.199399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:29:24.302 [2024-04-18 09:56:59.199464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:37608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:24.302 [2024-04-18 09:56:59.199491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:29:24.302 [2024-04-18 09:56:59.199525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:37616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:24.303 [2024-04-18 09:56:59.199546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:29:24.303 [2024-04-18 09:57:05.799776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:122816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:24.303 [2024-04-18 09:57:05.799865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:29:24.303 [2024-04-18 09:57:05.799971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:122824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:24.303 [2024-04-18 09:57:05.800001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:29:24.303 [2024-04-18 09:57:05.800047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:122832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:24.303 [2024-04-18 09:57:05.800069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:29:24.303 [2024-04-18 09:57:05.800101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:122840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:24.303 [2024-04-18 09:57:05.800123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:29:24.303 [2024-04-18 09:57:05.800170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:122848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:24.303 [2024-04-18 09:57:05.800195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:29:24.303 [2024-04-18 09:57:05.800255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:122856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:24.303 [2024-04-18 09:57:05.800277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:29:24.303 [2024-04-18 09:57:05.800309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:122864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:24.303 [2024-04-18 09:57:05.800331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:29:24.303 [2024-04-18 09:57:05.800361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:122872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:24.303 [2024-04-18 09:57:05.800381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:29:24.303 [2024-04-18 09:57:05.800412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:122432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:24.303 [2024-04-18 09:57:05.800433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:29:24.303 [2024-04-18 09:57:05.800478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:122440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:24.303 [2024-04-18 09:57:05.800499] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:29:24.303 [2024-04-18 09:57:05.800529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:122448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:24.303 [2024-04-18 09:57:05.800549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:29:24.303 [2024-04-18 09:57:05.800579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:122456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:24.303 [2024-04-18 09:57:05.800599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:29:24.303 [2024-04-18 09:57:05.800628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:122464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:24.303 [2024-04-18 09:57:05.800648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:29:24.303 [2024-04-18 09:57:05.800677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:122472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:24.303 [2024-04-18 09:57:05.800697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:29:24.303 [2024-04-18 09:57:05.800728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:122480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:24.303 [2024-04-18 09:57:05.800748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:29:24.303 [2024-04-18 09:57:05.800778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:122488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:24.303 [2024-04-18 09:57:05.800797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:29:24.303 [2024-04-18 09:57:05.800826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:122496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:24.303 [2024-04-18 09:57:05.800846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:29:24.303 [2024-04-18 09:57:05.800876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:122504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:24.303 [2024-04-18 09:57:05.800925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:29:24.303 [2024-04-18 09:57:05.800958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:122512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:24.303 [2024-04-18 09:57:05.800979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:29:24.303 [2024-04-18 09:57:05.801010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:122520 len:8 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:29:24.303 [2024-04-18 09:57:05.801032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:29:24.303 [2024-04-18 09:57:05.801062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:122528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:24.303 [2024-04-18 09:57:05.801083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:29:24.303 [2024-04-18 09:57:05.801113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:122536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:24.303 [2024-04-18 09:57:05.801134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:29:24.303 [2024-04-18 09:57:05.801179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:122544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:24.303 [2024-04-18 09:57:05.801203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:29:24.303 [2024-04-18 09:57:05.801234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:122552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:24.303 [2024-04-18 09:57:05.801255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:29:24.303 [2024-04-18 09:57:05.801411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:122880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:24.303 [2024-04-18 09:57:05.801441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:29:24.303 [2024-04-18 09:57:05.801474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:122888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:24.303 [2024-04-18 09:57:05.801495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:29:24.303 [2024-04-18 09:57:05.801525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:122896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:24.303 [2024-04-18 09:57:05.801548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:29:24.303 [2024-04-18 09:57:05.801580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:122904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:24.303 [2024-04-18 09:57:05.801600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:24.303 [2024-04-18 09:57:05.801631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:122912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:24.303 [2024-04-18 09:57:05.801651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:24.303 [2024-04-18 09:57:05.801681] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:122920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:24.303 [2024-04-18 09:57:05.801723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:29:24.303 [2024-04-18 09:57:05.801756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:122928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:24.303 [2024-04-18 09:57:05.801776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:29:24.303 [2024-04-18 09:57:05.801807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:122936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:24.303 [2024-04-18 09:57:05.801827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:29:24.303 [2024-04-18 09:57:05.801857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:122944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:24.303 [2024-04-18 09:57:05.801877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:29:24.303 [2024-04-18 09:57:05.801908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:122952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:24.303 [2024-04-18 09:57:05.801929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:29:24.303 [2024-04-18 09:57:05.801959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:122960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:24.303 [2024-04-18 09:57:05.801979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:29:24.303 [2024-04-18 09:57:05.802010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:122968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:24.303 [2024-04-18 09:57:05.802046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:29:24.303 [2024-04-18 09:57:05.802077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:122976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:24.303 [2024-04-18 09:57:05.802098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:29:24.303 [2024-04-18 09:57:05.802130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:122984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:24.303 [2024-04-18 09:57:05.802151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:29:24.303 [2024-04-18 09:57:05.802203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:122992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:24.303 [2024-04-18 09:57:05.802225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:29:24.303 [2024-04-18 
09:57:05.802257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:123000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:24.303 [2024-04-18 09:57:05.802307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:29:24.303 [2024-04-18 09:57:05.802352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:122560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:24.303 [2024-04-18 09:57:05.802374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:29:24.303 [2024-04-18 09:57:05.802405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:122568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:24.303 [2024-04-18 09:57:05.802436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:29:24.303 [2024-04-18 09:57:05.802470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:122576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:24.303 [2024-04-18 09:57:05.802492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:29:24.303 [2024-04-18 09:57:05.802544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:122584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:24.303 [2024-04-18 09:57:05.802580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:29:24.303 [2024-04-18 09:57:05.802611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:122592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:24.303 [2024-04-18 09:57:05.802631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:29:24.303 [2024-04-18 09:57:05.802661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:122600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:24.303 [2024-04-18 09:57:05.802691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:29:24.303 [2024-04-18 09:57:05.802746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:122608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:24.303 [2024-04-18 09:57:05.802767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:29:24.303 [2024-04-18 09:57:05.802798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:122616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:24.303 [2024-04-18 09:57:05.802819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:29:24.303 [2024-04-18 09:57:05.802855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:123008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:24.303 [2024-04-18 09:57:05.802877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 
cid:80 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:29:24.303 [2024-04-18 09:57:05.802908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:123016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:24.303 [2024-04-18 09:57:05.802947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:29:24.303 [2024-04-18 09:57:05.802980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:123024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:24.303 [2024-04-18 09:57:05.803002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:29:24.303 [2024-04-18 09:57:05.803033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:123032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:24.303 [2024-04-18 09:57:05.803054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:29:24.303 [2024-04-18 09:57:05.803085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:123040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:24.303 [2024-04-18 09:57:05.803106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:29:24.303 [2024-04-18 09:57:05.803152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:123048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:24.303 [2024-04-18 09:57:05.803177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:29:24.303 [2024-04-18 09:57:05.803220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:123056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:24.303 [2024-04-18 09:57:05.803242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:29:24.303 [2024-04-18 09:57:05.803287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:123064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:24.303 [2024-04-18 09:57:05.803308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:29:24.303 [2024-04-18 09:57:05.803341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:123072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:24.303 [2024-04-18 09:57:05.803361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:29:24.303 [2024-04-18 09:57:05.803393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:123080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:24.303 [2024-04-18 09:57:05.803414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:29:24.303 [2024-04-18 09:57:05.803445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:123088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:24.303 [2024-04-18 09:57:05.803467] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:29:24.303 [2024-04-18 09:57:05.803498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:123096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:24.303 [2024-04-18 09:57:05.803519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:24.303 [2024-04-18 09:57:05.803550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:123104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:24.303 [2024-04-18 09:57:05.803571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:24.303 [2024-04-18 09:57:05.803602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:123112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:24.303 [2024-04-18 09:57:05.803622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:29:24.303 [2024-04-18 09:57:05.803654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:123120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:24.303 [2024-04-18 09:57:05.803674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:29:24.303 [2024-04-18 09:57:05.803706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:123128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:24.304 [2024-04-18 09:57:05.803727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:29:24.304 [2024-04-18 09:57:05.803782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:123136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:24.304 [2024-04-18 09:57:05.803808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:29:24.304 [2024-04-18 09:57:05.803840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:123144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:24.304 [2024-04-18 09:57:05.803861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:29:24.304 [2024-04-18 09:57:05.803978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:123152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:24.304 [2024-04-18 09:57:05.804002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:29:24.304 [2024-04-18 09:57:05.804034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:123160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:24.304 [2024-04-18 09:57:05.804055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:29:24.304 [2024-04-18 09:57:05.804086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:122624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:29:24.304 [2024-04-18 09:57:05.804107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:29:24.304 [2024-04-18 09:57:05.804153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:122632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:24.304 [2024-04-18 09:57:05.804178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:29:24.304 [2024-04-18 09:57:05.804210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:122640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:24.304 [2024-04-18 09:57:05.804231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:29:24.304 [2024-04-18 09:57:05.804262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:122648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:24.304 [2024-04-18 09:57:05.804284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:29:24.304 [2024-04-18 09:57:05.804315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:122656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:24.304 [2024-04-18 09:57:05.804336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:29:24.304 [2024-04-18 09:57:05.804366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:122664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:24.304 [2024-04-18 09:57:05.804387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:29:24.304 [2024-04-18 09:57:05.804417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:122672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:24.304 [2024-04-18 09:57:05.804440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:29:24.304 [2024-04-18 09:57:05.804472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:122680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:24.304 [2024-04-18 09:57:05.804493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:29:24.304 [2024-04-18 09:57:05.804523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:122688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:24.304 [2024-04-18 09:57:05.804544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:29:24.304 [2024-04-18 09:57:05.804576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:122696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:24.304 [2024-04-18 09:57:05.804596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:29:24.304 [2024-04-18 09:57:05.804627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:41 nsid:1 lba:122704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:24.304 [2024-04-18 09:57:05.804657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:29:24.304 [2024-04-18 09:57:05.804689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:122712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:24.304 [2024-04-18 09:57:05.804711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:29:24.304 [2024-04-18 09:57:05.804741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:122720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:24.304 [2024-04-18 09:57:05.804762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:29:24.304 [2024-04-18 09:57:05.804793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:122728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:24.304 [2024-04-18 09:57:05.804814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:29:24.304 [2024-04-18 09:57:05.804854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:122736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:24.304 [2024-04-18 09:57:05.804875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:29:24.304 [2024-04-18 09:57:05.804907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:122744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:24.304 [2024-04-18 09:57:05.804928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:29:24.304 [2024-04-18 09:57:05.804958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:123168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:24.304 [2024-04-18 09:57:05.804979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:29:24.304 [2024-04-18 09:57:05.805010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:123176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:24.304 [2024-04-18 09:57:05.805031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:29:24.304 [2024-04-18 09:57:05.805062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:123184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:24.304 [2024-04-18 09:57:05.805082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:29:24.304 [2024-04-18 09:57:05.805113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:123192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:24.304 [2024-04-18 09:57:05.805150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:29:24.304 [2024-04-18 09:57:05.805184] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:123200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:24.304 [2024-04-18 09:57:05.805206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:29:24.304 [2024-04-18 09:57:05.805237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:123208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:24.304 [2024-04-18 09:57:05.805258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:29:24.304 [2024-04-18 09:57:05.805289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:123216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:24.304 [2024-04-18 09:57:05.805319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:29:24.304 [2024-04-18 09:57:05.805353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:123224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:24.304 [2024-04-18 09:57:05.805374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:24.304 [2024-04-18 09:57:05.805405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:123232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:24.304 [2024-04-18 09:57:05.805426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:24.304 [2024-04-18 09:57:05.805457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:123240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:24.304 [2024-04-18 09:57:05.805478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:29:24.304 [2024-04-18 09:57:05.805508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:123248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:24.304 [2024-04-18 09:57:05.805529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:29:24.304 [2024-04-18 09:57:05.805560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:123256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:24.304 [2024-04-18 09:57:05.805581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:29:24.304 [2024-04-18 09:57:05.805611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:122752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:24.304 [2024-04-18 09:57:05.805632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:29:24.304 [2024-04-18 09:57:05.805662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:122760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:24.304 [2024-04-18 09:57:05.805683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 
sqhd:0067 p:0 m:0 dnr:0 00:29:24.304 [2024-04-18 09:57:05.805714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:122768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:24.304 [2024-04-18 09:57:05.805734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:29:24.304 [2024-04-18 09:57:05.805765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:122776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:24.304 [2024-04-18 09:57:05.805786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:29:24.304 [2024-04-18 09:57:05.805816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:122784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:24.304 [2024-04-18 09:57:05.805837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:29:24.304 [2024-04-18 09:57:05.805868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:122792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:24.304 [2024-04-18 09:57:05.805889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:29:24.304 [2024-04-18 09:57:05.805921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:122800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:24.304 [2024-04-18 09:57:05.805950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:29:24.304 [2024-04-18 09:57:05.806990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:122808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:24.304 [2024-04-18 09:57:05.807028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:29:24.304 [2024-04-18 09:57:05.807077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:123264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:24.304 [2024-04-18 09:57:05.807100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:29:24.304 [2024-04-18 09:57:05.807157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:123272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:24.304 [2024-04-18 09:57:05.807183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:29:24.304 [2024-04-18 09:57:05.807223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:123280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:24.304 [2024-04-18 09:57:05.807252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:29:24.304 [2024-04-18 09:57:05.807342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:123288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:24.304 [2024-04-18 09:57:05.807364] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:29:24.304 [2024-04-18 09:57:05.807402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:123296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:24.304 [2024-04-18 09:57:05.807422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:29:24.304 [2024-04-18 09:57:05.807459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:123304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:24.304 [2024-04-18 09:57:05.807480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:29:24.304 [2024-04-18 09:57:05.807517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:123312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:24.304 [2024-04-18 09:57:05.807538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:29:24.304 [2024-04-18 09:57:05.807601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:123320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:24.304 [2024-04-18 09:57:05.807627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:29:24.304 [2024-04-18 09:57:05.807666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:123328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:24.304 [2024-04-18 09:57:05.807702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:29:24.304 [2024-04-18 09:57:05.807741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:123336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:24.304 [2024-04-18 09:57:05.807762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:29:24.304 [2024-04-18 09:57:05.807800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:123344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:24.304 [2024-04-18 09:57:05.807821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:29:24.304 [2024-04-18 09:57:05.807872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:123352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:24.304 [2024-04-18 09:57:05.807895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:29:24.304 [2024-04-18 09:57:05.807933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:123360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:24.304 [2024-04-18 09:57:05.807954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:29:24.304 [2024-04-18 09:57:05.807991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:123368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:24.304 [2024-04-18 
09:57:05.808012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:29:24.304 [2024-04-18 09:57:05.808050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:123376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:24.304 [2024-04-18 09:57:05.808071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:29:24.304 [2024-04-18 09:57:05.808115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:123384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:24.304 [2024-04-18 09:57:05.808137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:29:24.304 [2024-04-18 09:57:05.808195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:123392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:24.304 [2024-04-18 09:57:05.808225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:29:24.304 [2024-04-18 09:57:05.808263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:123400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:24.304 [2024-04-18 09:57:05.808284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:29:24.304 [2024-04-18 09:57:05.808323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:123408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:24.304 [2024-04-18 09:57:05.808344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:24.304 [2024-04-18 09:57:05.808396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:123416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:24.304 [2024-04-18 09:57:05.808416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:24.304 [2024-04-18 09:57:05.808452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:123424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:24.304 [2024-04-18 09:57:05.808473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:24.304 [2024-04-18 09:57:05.808509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:123432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:24.304 [2024-04-18 09:57:05.808536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:29:24.304 [2024-04-18 09:57:12.830990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:50264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:24.304 [2024-04-18 09:57:12.831071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:29:24.304 [2024-04-18 09:57:12.831195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:50272 len:8 SGL DATA 
BLOCK OFFSET 0x0 len:0x1000 00:29:24.304 [2024-04-18 09:57:12.831230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:29:24.304 [2024-04-18 09:57:12.831265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:50280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:24.304 [2024-04-18 09:57:12.831293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:29:24.304 [2024-04-18 09:57:12.831323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:50288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:24.304 [2024-04-18 09:57:12.831343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:29:24.304 [2024-04-18 09:57:12.831373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:50296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:24.304 [2024-04-18 09:57:12.831393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:29:24.304 [2024-04-18 09:57:12.831423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:50304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:24.304 [2024-04-18 09:57:12.831444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:29:24.304 [2024-04-18 09:57:12.831487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:50312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:24.304 [2024-04-18 09:57:12.831507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:29:24.304 [2024-04-18 09:57:12.831536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:50320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:24.304 [2024-04-18 09:57:12.831556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:29:24.304 [2024-04-18 09:57:12.831584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:49880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:24.304 [2024-04-18 09:57:12.831604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:29:24.304 [2024-04-18 09:57:12.831640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:49888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:24.304 [2024-04-18 09:57:12.831660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:29:24.304 [2024-04-18 09:57:12.831689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:49896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:24.304 [2024-04-18 09:57:12.831709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:29:24.304 [2024-04-18 09:57:12.831737] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:120 nsid:1 lba:49904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:24.304 [2024-04-18 09:57:12.831757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:29:24.305 [2024-04-18 09:57:12.831786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:49912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:24.305 [2024-04-18 09:57:12.831805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:29:24.305 [2024-04-18 09:57:12.831834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:49920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:24.305 [2024-04-18 09:57:12.831865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:29:24.305 [2024-04-18 09:57:12.831897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:49928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:24.305 [2024-04-18 09:57:12.831917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:29:24.305 [2024-04-18 09:57:12.831946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:49936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:24.305 [2024-04-18 09:57:12.831966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:29:24.305 [2024-04-18 09:57:12.831995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:49944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:24.305 [2024-04-18 09:57:12.832015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:29:24.305 [2024-04-18 09:57:12.832046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:49952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:24.305 [2024-04-18 09:57:12.832066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:29:24.305 [2024-04-18 09:57:12.832096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:49960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:24.305 [2024-04-18 09:57:12.832116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:29:24.305 [2024-04-18 09:57:12.832158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:49968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:24.305 [2024-04-18 09:57:12.832181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:29:24.305 [2024-04-18 09:57:12.832210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:49976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:24.305 [2024-04-18 09:57:12.832238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:29:24.305 [2024-04-18 
09:57:12.832267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:49984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:24.305 [2024-04-18 09:57:12.832287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:29:24.305 [2024-04-18 09:57:12.832317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:49992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:24.305 [2024-04-18 09:57:12.832337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:29:24.305 [2024-04-18 09:57:12.832366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:50000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:24.305 [2024-04-18 09:57:12.832386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:29:24.305 [2024-04-18 09:57:12.832523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:50328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:24.305 [2024-04-18 09:57:12.832553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:29:24.305 [2024-04-18 09:57:12.832586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:50336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:24.305 [2024-04-18 09:57:12.832652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:29:24.305 [2024-04-18 09:57:12.832685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:50344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:24.305 [2024-04-18 09:57:12.832706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:29:24.305 [2024-04-18 09:57:12.832736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:50352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:24.305 [2024-04-18 09:57:12.832757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:29:24.305 [2024-04-18 09:57:12.832788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:50360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:24.305 [2024-04-18 09:57:12.832808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:24.305 [2024-04-18 09:57:12.832838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:50368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:24.305 [2024-04-18 09:57:12.832858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:24.305 [2024-04-18 09:57:12.832888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:50376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:24.305 [2024-04-18 09:57:12.832909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 
cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:24.305 [2024-04-18 09:57:12.832939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:50384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:24.305 [2024-04-18 09:57:12.832959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:29:24.305 [2024-04-18 09:57:12.832990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:50392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:24.305 [2024-04-18 09:57:12.833010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:29:24.305 [2024-04-18 09:57:12.833055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:50400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:24.305 [2024-04-18 09:57:12.833075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:29:24.305 [2024-04-18 09:57:12.833105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:50408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:24.305 [2024-04-18 09:57:12.833126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:29:24.305 [2024-04-18 09:57:12.833156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:50416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:24.305 [2024-04-18 09:57:12.833196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:29:24.305 [2024-04-18 09:57:12.833228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:50424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:24.305 [2024-04-18 09:57:12.833249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:29:24.305 [2024-04-18 09:57:12.833278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:50432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:24.305 [2024-04-18 09:57:12.833298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:29:24.305 [2024-04-18 09:57:12.833337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:50440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:24.305 [2024-04-18 09:57:12.833359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:29:24.305 [2024-04-18 09:57:12.833388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:50448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:24.305 [2024-04-18 09:57:12.833409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:29:24.305 [2024-04-18 09:57:12.833438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:50008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:24.305 [2024-04-18 09:57:12.833458] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:29:24.305 [2024-04-18 09:57:12.833487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:50016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:24.305 [2024-04-18 09:57:12.833507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:29:24.305 [2024-04-18 09:57:12.833536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:50024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:24.305 [2024-04-18 09:57:12.833556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:29:24.305 [2024-04-18 09:57:12.833585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:50032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:24.305 [2024-04-18 09:57:12.833605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:29:24.305 [2024-04-18 09:57:12.833634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:50040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:24.305 [2024-04-18 09:57:12.833657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:29:24.305 [2024-04-18 09:57:12.833706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:50048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:24.305 [2024-04-18 09:57:12.833727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:29:24.305 [2024-04-18 09:57:12.833756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:50056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:24.305 [2024-04-18 09:57:12.833777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:29:24.305 [2024-04-18 09:57:12.833806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:50064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:24.305 [2024-04-18 09:57:12.833826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:29:24.305 [2024-04-18 09:57:12.833854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:50456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:24.305 [2024-04-18 09:57:12.833875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:29:24.305 [2024-04-18 09:57:12.833904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:50464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:24.305 [2024-04-18 09:57:12.833925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:29:24.305 [2024-04-18 09:57:12.833964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:50472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:24.305 [2024-04-18 
09:57:12.833985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:29:24.305 [2024-04-18 09:57:12.834023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:50480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:24.305 [2024-04-18 09:57:12.834044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:29:24.305 [2024-04-18 09:57:12.834074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:50488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:24.305 [2024-04-18 09:57:12.834094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:29:24.305 [2024-04-18 09:57:12.834137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:50496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:24.305 [2024-04-18 09:57:12.834161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:29:24.305 [2024-04-18 09:57:12.834191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:50504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:24.305 [2024-04-18 09:57:12.834212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:29:24.305 [2024-04-18 09:57:12.834241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:50512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:24.305 [2024-04-18 09:57:12.834261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:29:24.305 [2024-04-18 09:57:12.834290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:50520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:24.305 [2024-04-18 09:57:12.834310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:29:24.305 [2024-04-18 09:57:12.834339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:50528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:24.305 [2024-04-18 09:57:12.834360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:29:24.305 [2024-04-18 09:57:12.834389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:50536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:24.305 [2024-04-18 09:57:12.834409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:29:24.305 [2024-04-18 09:57:12.834438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:50544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:24.305 [2024-04-18 09:57:12.834458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:29:24.305 [2024-04-18 09:57:12.834493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:50552 len:8 SGL DATA 
BLOCK OFFSET 0x0 len:0x1000 00:29:24.305 [2024-04-18 09:57:12.834516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:29:24.305 [2024-04-18 09:57:12.834545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:50560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:24.305 [2024-04-18 09:57:12.834565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:24.305 [2024-04-18 09:57:12.834598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:50568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:24.305 [2024-04-18 09:57:12.834629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:24.305 [2024-04-18 09:57:12.834660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:50576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:24.305 [2024-04-18 09:57:12.834681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:29:24.305 [2024-04-18 09:57:12.834732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:50584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:24.305 [2024-04-18 09:57:12.834757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:29:24.305 [2024-04-18 09:57:12.834789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:50592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:24.305 [2024-04-18 09:57:12.834809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:29:24.305 [2024-04-18 09:57:12.834838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:50600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:24.305 [2024-04-18 09:57:12.834858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:29:24.305 [2024-04-18 09:57:12.834887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:50608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:24.305 [2024-04-18 09:57:12.834907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:29:24.305 [2024-04-18 09:57:12.834963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:50072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:24.305 [2024-04-18 09:57:12.834987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:29:24.305 [2024-04-18 09:57:12.835017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:50080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:24.305 [2024-04-18 09:57:12.835038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:29:24.305 [2024-04-18 09:57:12.835068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:81 nsid:1 lba:50088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:24.305 [2024-04-18 09:57:12.835088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:29:24.305 [2024-04-18 09:57:12.835118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:50096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:24.305 [2024-04-18 09:57:12.835150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:29:24.305 [2024-04-18 09:57:12.835185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:50104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:24.305 [2024-04-18 09:57:12.835206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:29:24.305 [2024-04-18 09:57:12.835240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:50112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:24.305 [2024-04-18 09:57:12.835260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:29:24.305 [2024-04-18 09:57:12.835289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:50120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:24.305 [2024-04-18 09:57:12.835320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:29:24.305 [2024-04-18 09:57:12.835351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:50128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:24.305 [2024-04-18 09:57:12.835372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:29:24.305 [2024-04-18 09:57:12.835401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:50136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:24.305 [2024-04-18 09:57:12.835422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:29:24.305 [2024-04-18 09:57:12.835451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:50144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:24.305 [2024-04-18 09:57:12.835471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:29:24.305 [2024-04-18 09:57:12.835501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:50152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:24.305 [2024-04-18 09:57:12.835521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:29:24.305 [2024-04-18 09:57:12.835550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:50160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:24.305 [2024-04-18 09:57:12.835570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:29:24.305 [2024-04-18 09:57:12.835600] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:50168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:24.306 [2024-04-18 09:57:12.835620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:29:24.306 [2024-04-18 09:57:12.835650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:50176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:24.306 [2024-04-18 09:57:12.835671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:29:24.306 [2024-04-18 09:57:12.835701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:50184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:24.306 [2024-04-18 09:57:12.835721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:29:24.306 [2024-04-18 09:57:12.835751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:50192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:24.306 [2024-04-18 09:57:12.835772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:29:24.306 [2024-04-18 09:57:12.835801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:50616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:24.306 [2024-04-18 09:57:12.835821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:29:24.306 [2024-04-18 09:57:12.835850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:50624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:24.306 [2024-04-18 09:57:12.835871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:29:24.306 [2024-04-18 09:57:12.835901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:50632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:24.306 [2024-04-18 09:57:12.835921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:29:24.306 [2024-04-18 09:57:12.835959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:50640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:24.306 [2024-04-18 09:57:12.835980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:29:24.306 [2024-04-18 09:57:12.836010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:50648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:24.306 [2024-04-18 09:57:12.836030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:29:24.306 [2024-04-18 09:57:12.836059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:50656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:24.306 [2024-04-18 09:57:12.836080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:003d p:0 
m:0 dnr:0 00:29:24.306 [2024-04-18 09:57:12.836109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:50664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:24.306 [2024-04-18 09:57:12.836142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:29:24.306 [2024-04-18 09:57:12.836176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:50672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:24.306 [2024-04-18 09:57:12.836197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:29:24.306 [2024-04-18 09:57:12.836226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:50680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:24.306 [2024-04-18 09:57:12.836250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:29:24.306 [2024-04-18 09:57:12.836280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:50688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:24.306 [2024-04-18 09:57:12.836300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:24.306 [2024-04-18 09:57:12.836330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:50696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:24.306 [2024-04-18 09:57:12.836350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:24.306 [2024-04-18 09:57:12.836381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:50704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:24.306 [2024-04-18 09:57:12.836401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:29:24.306 [2024-04-18 09:57:12.836431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:50200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:24.306 [2024-04-18 09:57:12.836462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:29:24.306 [2024-04-18 09:57:12.836492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:50208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:24.306 [2024-04-18 09:57:12.836512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:29:24.306 [2024-04-18 09:57:12.836541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:50216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:24.306 [2024-04-18 09:57:12.836567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:29:24.306 [2024-04-18 09:57:12.836605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:50224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:24.306 [2024-04-18 09:57:12.836627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:29:24.306 [2024-04-18 09:57:12.836655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:50232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:24.306 [2024-04-18 09:57:12.836675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:29:24.306 [2024-04-18 09:57:12.836705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:50240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:24.306 [2024-04-18 09:57:12.836725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:29:24.306 [2024-04-18 09:57:12.836755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:50248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:24.306 [2024-04-18 09:57:12.836775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:29:24.306 [2024-04-18 09:57:12.837729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:50256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:24.306 [2024-04-18 09:57:12.837767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:29:24.306 [2024-04-18 09:57:12.837814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:50712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:24.306 [2024-04-18 09:57:12.837837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:29:24.306 [2024-04-18 09:57:12.837874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:50720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:24.306 [2024-04-18 09:57:12.837894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:29:24.306 [2024-04-18 09:57:12.837932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:50728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:24.306 [2024-04-18 09:57:12.837952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:29:24.306 [2024-04-18 09:57:12.837988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:50736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:24.306 [2024-04-18 09:57:12.838008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:29:24.306 [2024-04-18 09:57:12.838045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:50744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:24.306 [2024-04-18 09:57:12.838065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:29:24.306 [2024-04-18 09:57:12.838120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:50752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:24.306 [2024-04-18 09:57:12.838163] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:29:24.306 [2024-04-18 09:57:12.838202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:50760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:24.306 [2024-04-18 09:57:12.838224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:29:24.306 [2024-04-18 09:57:12.838283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:50768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:24.306 [2024-04-18 09:57:12.838320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:29:24.306 [2024-04-18 09:57:12.838361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:50776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:24.306 [2024-04-18 09:57:12.838389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:29:24.306 [2024-04-18 09:57:12.838426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:50784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:24.306 [2024-04-18 09:57:12.838448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:29:24.306 [2024-04-18 09:57:12.838485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:50792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:24.306 [2024-04-18 09:57:12.838505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:29:24.306 [2024-04-18 09:57:12.838541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:50800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:24.306 [2024-04-18 09:57:12.838562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:29:24.306 [2024-04-18 09:57:12.838598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:50808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:24.306 [2024-04-18 09:57:12.838619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:29:24.306 [2024-04-18 09:57:12.838655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:50816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:24.306 [2024-04-18 09:57:12.838686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:29:24.306 [2024-04-18 09:57:12.838722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:50824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:24.306 [2024-04-18 09:57:12.838743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:29:24.306 [2024-04-18 09:57:12.838785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:50832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:24.306 
[2024-04-18 09:57:12.838807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:29:24.306 [2024-04-18 09:57:12.838844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:50840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:24.306 [2024-04-18 09:57:12.838864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:29:24.306 [2024-04-18 09:57:12.838901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:50848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:24.306 [2024-04-18 09:57:12.838922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:29:24.306 [2024-04-18 09:57:12.838976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:50856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:24.306 [2024-04-18 09:57:12.839004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:29:24.306 [2024-04-18 09:57:12.839041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:50864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:24.306 [2024-04-18 09:57:12.839071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:29:24.306 [2024-04-18 09:57:12.839109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:50872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:24.306 [2024-04-18 09:57:12.839145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:29:24.306 [2024-04-18 09:57:12.839186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:50880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:24.306 [2024-04-18 09:57:12.839208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:24.306 [2024-04-18 09:57:26.167349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:120680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:24.306 [2024-04-18 09:57:26.167415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:24.306 [2024-04-18 09:57:26.167499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:120688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:24.306 [2024-04-18 09:57:26.167528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:24.306 [2024-04-18 09:57:26.167552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:120696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:24.306 [2024-04-18 09:57:26.167572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:24.306 [2024-04-18 09:57:26.167593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:120704 len:8 SGL DATA BLOCK OFFSET 0x0 
len:0x1000 00:29:24.306 [2024-04-18 09:57:26.167611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:24.306 [2024-04-18 09:57:26.167632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:120168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:24.306 [2024-04-18 09:57:26.167651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:24.306 [2024-04-18 09:57:26.167672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:120176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:24.306 [2024-04-18 09:57:26.167691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:24.306 [2024-04-18 09:57:26.167717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:120184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:24.306 [2024-04-18 09:57:26.167735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:24.306 [2024-04-18 09:57:26.167756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:120192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:24.306 [2024-04-18 09:57:26.167775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:24.306 [2024-04-18 09:57:26.167796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:120200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:24.306 [2024-04-18 09:57:26.167816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:24.306 [2024-04-18 09:57:26.167836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:120208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:24.306 [2024-04-18 09:57:26.167855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:24.306 [2024-04-18 09:57:26.167894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:120216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:24.306 [2024-04-18 09:57:26.167915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:24.306 [2024-04-18 09:57:26.167936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:120224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:24.306 [2024-04-18 09:57:26.167954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:24.306 [2024-04-18 09:57:26.167974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:120712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:24.306 [2024-04-18 09:57:26.167993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:24.306 [2024-04-18 09:57:26.168014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:120720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:24.306 
[2024-04-18 09:57:26.168032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:24.306 [2024-04-18 09:57:26.168053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:120728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:24.306 [2024-04-18 09:57:26.168071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:24.306 [2024-04-18 09:57:26.168092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:120736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:24.306 [2024-04-18 09:57:26.168110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:24.306 [2024-04-18 09:57:26.168149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:120744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:24.306 [2024-04-18 09:57:26.168174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:24.306 [2024-04-18 09:57:26.168195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:120752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:24.306 [2024-04-18 09:57:26.168214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:24.306 [2024-04-18 09:57:26.168235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:120760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:24.306 [2024-04-18 09:57:26.168254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:24.306 [2024-04-18 09:57:26.168275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:120768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:24.306 [2024-04-18 09:57:26.168298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:24.306 [2024-04-18 09:57:26.168319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:120776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:24.306 [2024-04-18 09:57:26.168337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:24.306 [2024-04-18 09:57:26.168358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:120784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:24.306 [2024-04-18 09:57:26.168377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:24.306 [2024-04-18 09:57:26.168397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:120792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:24.306 [2024-04-18 09:57:26.168425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:24.306 [2024-04-18 09:57:26.168448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:120800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:24.306 [2024-04-18 09:57:26.168467] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:24.306 [2024-04-18 09:57:26.168488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:120808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:24.306 [2024-04-18 09:57:26.168507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:24.306 [2024-04-18 09:57:26.168528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:120816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:24.306 [2024-04-18 09:57:26.168546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:24.306 [2024-04-18 09:57:26.168567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:120824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:24.307 [2024-04-18 09:57:26.168587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:24.307 [2024-04-18 09:57:26.168608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:120832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:24.307 [2024-04-18 09:57:26.168627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:24.307 [2024-04-18 09:57:26.168647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:120840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:24.307 [2024-04-18 09:57:26.168666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:24.307 [2024-04-18 09:57:26.168687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:120848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:24.307 [2024-04-18 09:57:26.168705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:24.307 [2024-04-18 09:57:26.168726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:120232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:24.307 [2024-04-18 09:57:26.168745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:24.307 [2024-04-18 09:57:26.168766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:120240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:24.307 [2024-04-18 09:57:26.168785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:24.307 [2024-04-18 09:57:26.168805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:120248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:24.307 [2024-04-18 09:57:26.168826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:24.307 [2024-04-18 09:57:26.168847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:120256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:24.307 [2024-04-18 09:57:26.168866] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:24.307 [2024-04-18 09:57:26.168887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:120264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:24.307 [2024-04-18 09:57:26.168906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:24.307 [2024-04-18 09:57:26.168934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:120272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:24.307 [2024-04-18 09:57:26.168954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:24.307 [2024-04-18 09:57:26.168975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:120280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:24.307 [2024-04-18 09:57:26.168994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:24.307 [2024-04-18 09:57:26.169015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:120288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:24.307 [2024-04-18 09:57:26.169033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:24.307 [2024-04-18 09:57:26.169054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:120296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:24.307 [2024-04-18 09:57:26.169073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:24.307 [2024-04-18 09:57:26.169094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:120304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:24.307 [2024-04-18 09:57:26.169113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:24.307 [2024-04-18 09:57:26.169148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:120312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:24.307 [2024-04-18 09:57:26.169169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:24.307 [2024-04-18 09:57:26.169190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:120320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:24.307 [2024-04-18 09:57:26.169209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:24.307 [2024-04-18 09:57:26.169231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:120328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:24.307 [2024-04-18 09:57:26.169255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:24.307 [2024-04-18 09:57:26.169276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:120336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:24.307 [2024-04-18 09:57:26.169295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION 
(00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:24.307 [2024-04-18 09:57:26.169316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:120344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:24.307 [2024-04-18 09:57:26.169334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:24.307 [2024-04-18 09:57:26.169354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:120352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:24.307 [2024-04-18 09:57:26.169378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:24.307 [2024-04-18 09:57:26.169399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:120856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:24.307 [2024-04-18 09:57:26.169417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:24.307 [2024-04-18 09:57:26.169457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:120864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:24.307 [2024-04-18 09:57:26.169477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:24.307 [2024-04-18 09:57:26.169508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:120872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:24.307 [2024-04-18 09:57:26.169530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:24.307 [2024-04-18 09:57:26.169552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:120880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:24.307 [2024-04-18 09:57:26.169572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:24.307 [2024-04-18 09:57:26.169593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:120888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:24.307 [2024-04-18 09:57:26.169612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:24.307 [2024-04-18 09:57:26.169641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:120896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:24.307 [2024-04-18 09:57:26.169671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:24.307 [2024-04-18 09:57:26.169693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:120904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:24.307 [2024-04-18 09:57:26.169712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:24.307 [2024-04-18 09:57:26.169733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:120912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:24.307 [2024-04-18 09:57:26.169751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 
m:0 dnr:0 00:29:24.307 [2024-04-18 09:57:26.169772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:120920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:24.307 [2024-04-18 09:57:26.169792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:24.307 [2024-04-18 09:57:26.169813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:120928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:24.307 [2024-04-18 09:57:26.169832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:24.307 [2024-04-18 09:57:26.169854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:120936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:24.307 [2024-04-18 09:57:26.169873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:24.307 [2024-04-18 09:57:26.169894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:120944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:24.307 [2024-04-18 09:57:26.169913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:24.307 [2024-04-18 09:57:26.169934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:120952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:24.307 [2024-04-18 09:57:26.169953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:24.307 [2024-04-18 09:57:26.169974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:120960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:24.307 [2024-04-18 09:57:26.169993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:24.307 [2024-04-18 09:57:26.170014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:120968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:24.307 [2024-04-18 09:57:26.170039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:24.307 [2024-04-18 09:57:26.170062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:120360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:24.307 [2024-04-18 09:57:26.170081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:24.307 [2024-04-18 09:57:26.170103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:120368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:24.307 [2024-04-18 09:57:26.170135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:24.307 [2024-04-18 09:57:26.170160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:120376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:24.307 [2024-04-18 09:57:26.170179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:24.307 [2024-04-18 
09:57:26.170201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:120384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:24.307 [2024-04-18 09:57:26.170221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:24.307 [2024-04-18 09:57:26.170242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:120392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:24.307 [2024-04-18 09:57:26.170261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:24.307 [2024-04-18 09:57:26.170282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:120400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:24.307 [2024-04-18 09:57:26.170300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:24.307 [2024-04-18 09:57:26.170321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:120408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:24.307 [2024-04-18 09:57:26.170340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:24.307 [2024-04-18 09:57:26.170361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:120416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:24.307 [2024-04-18 09:57:26.170385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:24.307 [2024-04-18 09:57:26.170406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:120976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:24.307 [2024-04-18 09:57:26.170425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:24.307 [2024-04-18 09:57:26.170447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:120984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:24.307 [2024-04-18 09:57:26.170465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:24.307 [2024-04-18 09:57:26.170486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:120992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:24.307 [2024-04-18 09:57:26.170505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:24.307 [2024-04-18 09:57:26.170526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:121000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:24.307 [2024-04-18 09:57:26.170545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:24.307 [2024-04-18 09:57:26.170574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:121008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:24.307 [2024-04-18 09:57:26.170594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:24.307 [2024-04-18 09:57:26.170615] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:121016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:24.307 [2024-04-18 09:57:26.170634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:24.307 [2024-04-18 09:57:26.170661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:121024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:24.307 [2024-04-18 09:57:26.170679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:24.307 [2024-04-18 09:57:26.170700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:121032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:24.307 [2024-04-18 09:57:26.170719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:24.307 [2024-04-18 09:57:26.170740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:121040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:24.307 [2024-04-18 09:57:26.170758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:24.307 [2024-04-18 09:57:26.170779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:121048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:24.307 [2024-04-18 09:57:26.170798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:24.307 [2024-04-18 09:57:26.170819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:121056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:24.307 [2024-04-18 09:57:26.170838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:24.307 [2024-04-18 09:57:26.170866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:121064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:24.307 [2024-04-18 09:57:26.170887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:24.307 [2024-04-18 09:57:26.170908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:121072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:24.307 [2024-04-18 09:57:26.170927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:24.307 [2024-04-18 09:57:26.170948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:121080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:24.307 [2024-04-18 09:57:26.170980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:24.307 [2024-04-18 09:57:26.171008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:121088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:24.307 [2024-04-18 09:57:26.171028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:24.307 [2024-04-18 09:57:26.171048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
WRITE sqid:1 cid:94 nsid:1 lba:121096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:24.307 [2024-04-18 09:57:26.171067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:24.307 [2024-04-18 09:57:26.171088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:121104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:24.307 [2024-04-18 09:57:26.171115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:24.307 [2024-04-18 09:57:26.171153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:121112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:24.307 [2024-04-18 09:57:26.171174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:24.307 [2024-04-18 09:57:26.171195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:121120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:24.307 [2024-04-18 09:57:26.171214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:24.307 [2024-04-18 09:57:26.171235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:120424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:24.307 [2024-04-18 09:57:26.171254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:24.307 [2024-04-18 09:57:26.171275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:120432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:24.307 [2024-04-18 09:57:26.171294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:24.307 [2024-04-18 09:57:26.171315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:120440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:24.307 [2024-04-18 09:57:26.171334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:24.307 [2024-04-18 09:57:26.171354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:120448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:24.307 [2024-04-18 09:57:26.171373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:24.307 [2024-04-18 09:57:26.171394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:120456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:24.307 [2024-04-18 09:57:26.171413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:24.307 [2024-04-18 09:57:26.171434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:120464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:24.307 [2024-04-18 09:57:26.171452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:24.307 [2024-04-18 09:57:26.171473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:120472 
len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:24.307 [2024-04-18 09:57:26.171497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:24.307 [2024-04-18 09:57:26.171518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:120480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:24.307 [2024-04-18 09:57:26.171537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:24.307 [2024-04-18 09:57:26.171563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:120488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:24.307 [2024-04-18 09:57:26.171584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:24.307 [2024-04-18 09:57:26.171605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:120496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:24.307 [2024-04-18 09:57:26.171624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:24.307 [2024-04-18 09:57:26.171653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:120504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:24.307 [2024-04-18 09:57:26.171673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:24.307 [2024-04-18 09:57:26.171694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:120512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:24.307 [2024-04-18 09:57:26.171713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:24.307 [2024-04-18 09:57:26.171734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:120520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:24.307 [2024-04-18 09:57:26.171752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:24.307 [2024-04-18 09:57:26.171773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:120528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:24.307 [2024-04-18 09:57:26.171792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:24.307 [2024-04-18 09:57:26.171812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:120536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:24.307 [2024-04-18 09:57:26.171831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:24.308 [2024-04-18 09:57:26.171852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:120544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:24.308 [2024-04-18 09:57:26.171870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:24.308 [2024-04-18 09:57:26.171892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:121128 len:8 SGL DATA BLOCK OFFSET 0x0 
len:0x1000 00:29:24.308 [2024-04-18 09:57:26.171911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:24.308 [2024-04-18 09:57:26.171932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:121136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:24.308 [2024-04-18 09:57:26.171951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:24.308 [2024-04-18 09:57:26.171972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:121144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:24.308 [2024-04-18 09:57:26.171990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:24.308 [2024-04-18 09:57:26.172012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:121152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:24.308 [2024-04-18 09:57:26.172031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:24.308 [2024-04-18 09:57:26.172052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:121160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:24.308 [2024-04-18 09:57:26.172071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:24.308 [2024-04-18 09:57:26.172092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:121168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:24.308 [2024-04-18 09:57:26.172111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:24.308 [2024-04-18 09:57:26.172146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:121176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:24.308 [2024-04-18 09:57:26.172174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:24.308 [2024-04-18 09:57:26.172215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:121184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:24.308 [2024-04-18 09:57:26.172235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:24.308 [2024-04-18 09:57:26.172257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:120552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:24.308 [2024-04-18 09:57:26.172284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:24.308 [2024-04-18 09:57:26.172306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:120560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:24.308 [2024-04-18 09:57:26.172325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:24.308 [2024-04-18 09:57:26.172346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:120568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:24.308 [2024-04-18 
09:57:26.172364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:24.308 [2024-04-18 09:57:26.172385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:120576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:24.308 [2024-04-18 09:57:26.172404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:24.308 [2024-04-18 09:57:26.172425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:120584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:24.308 [2024-04-18 09:57:26.172444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:24.308 [2024-04-18 09:57:26.172465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:120592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:24.308 [2024-04-18 09:57:26.172483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:24.308 [2024-04-18 09:57:26.172504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:120600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:24.308 [2024-04-18 09:57:26.172523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:24.308 [2024-04-18 09:57:26.172544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:120608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:24.308 [2024-04-18 09:57:26.172562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:24.308 [2024-04-18 09:57:26.172584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:120616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:24.308 [2024-04-18 09:57:26.172603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:24.308 [2024-04-18 09:57:26.172624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:120624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:24.308 [2024-04-18 09:57:26.172642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:24.308 [2024-04-18 09:57:26.172663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:120632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:24.308 [2024-04-18 09:57:26.172681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:24.308 [2024-04-18 09:57:26.172702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:120640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:24.308 [2024-04-18 09:57:26.172729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:24.308 [2024-04-18 09:57:26.172751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:120648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:24.308 [2024-04-18 09:57:26.172770] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:24.308 [2024-04-18 09:57:26.172791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:120656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:24.308 [2024-04-18 09:57:26.172809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:24.308 [2024-04-18 09:57:26.172830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:120664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:24.308 [2024-04-18 09:57:26.172849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:24.308 [2024-04-18 09:57:26.172869] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x614000007c40 is same with the state(5) to be set 00:29:24.308 [2024-04-18 09:57:26.172901] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:29:24.308 [2024-04-18 09:57:26.172919] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:29:24.308 [2024-04-18 09:57:26.172942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:120672 len:8 PRP1 0x0 PRP2 0x0 00:29:24.308 [2024-04-18 09:57:26.172962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:24.308 [2024-04-18 09:57:26.173248] bdev_nvme.c:1600:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x614000007c40 was disconnected and freed. reset controller. 00:29:24.308 [2024-04-18 09:57:26.173396] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:29:24.308 [2024-04-18 09:57:26.173429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:24.308 [2024-04-18 09:57:26.173453] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:29:24.308 [2024-04-18 09:57:26.173472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:24.308 [2024-04-18 09:57:26.173492] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:29:24.308 [2024-04-18 09:57:26.173511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:24.308 [2024-04-18 09:57:26.173530] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:29:24.308 [2024-04-18 09:57:26.173549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:24.308 [2024-04-18 09:57:26.173567] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x614000006c40 is same with the state(5) to be set 00:29:24.308 [2024-04-18 09:57:26.175003] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:24.308 [2024-04-18 09:57:26.175063] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: 
Failed to flush tqpair=0x614000006c40 (9): Bad file descriptor 00:29:24.308 [2024-04-18 09:57:26.175602] uring.c: 641:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.308 [2024-04-18 09:57:26.175698] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.308 [2024-04-18 09:57:26.175764] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.308 [2024-04-18 09:57:26.175813] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000006c40 with addr=10.0.0.2, port=4421 00:29:24.308 [2024-04-18 09:57:26.175840] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x614000006c40 is same with the state(5) to be set 00:29:24.308 [2024-04-18 09:57:26.175892] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x614000006c40 (9): Bad file descriptor 00:29:24.308 [2024-04-18 09:57:26.175938] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:24.308 [2024-04-18 09:57:26.175963] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:24.308 [2024-04-18 09:57:26.175985] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:24.308 [2024-04-18 09:57:26.176035] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:24.308 [2024-04-18 09:57:26.176060] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:24.308 [2024-04-18 09:57:36.253492] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:29:24.308 Received shutdown signal, test time was about 55.328226 seconds 00:29:24.308 00:29:24.308 Latency(us) 00:29:24.308 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:24.308 Job: Nvme0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:29:24.308 Verification LBA range: start 0x0 length 0x4000 00:29:24.308 Nvme0n1 : 55.33 5484.22 21.42 0.00 0.00 23310.71 266.24 7046430.72 00:29:24.308 =================================================================================================================== 00:29:24.308 Total : 5484.22 21.42 0.00 0.00 23310.71 266.24 7046430.72 00:29:24.308 09:57:47 -- host/multipath.sh@120 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:29:24.308 09:57:47 -- host/multipath.sh@122 -- # trap - SIGINT SIGTERM EXIT 00:29:24.308 09:57:47 -- host/multipath.sh@124 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:29:24.308 09:57:47 -- host/multipath.sh@125 -- # nvmftestfini 00:29:24.308 09:57:47 -- nvmf/common.sh@477 -- # nvmfcleanup 00:29:24.308 09:57:47 -- nvmf/common.sh@117 -- # sync 00:29:24.308 09:57:47 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:29:24.308 09:57:47 -- nvmf/common.sh@120 -- # set +e 00:29:24.308 09:57:47 -- nvmf/common.sh@121 -- # for i in {1..20} 00:29:24.308 09:57:47 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:29:24.308 rmmod nvme_tcp 00:29:24.567 rmmod nvme_fabrics 00:29:24.567 rmmod nvme_keyring 00:29:24.567 09:57:47 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:29:24.567 09:57:47 -- nvmf/common.sh@124 -- # set -e 00:29:24.567 09:57:47 -- nvmf/common.sh@125 -- # return 0 00:29:24.567 09:57:47 -- nvmf/common.sh@478 -- # '[' -n 80453 ']' 00:29:24.567 09:57:47 -- 
nvmf/common.sh@479 -- # killprocess 80453 00:29:24.567 09:57:47 -- common/autotest_common.sh@936 -- # '[' -z 80453 ']' 00:29:24.567 09:57:47 -- common/autotest_common.sh@940 -- # kill -0 80453 00:29:24.567 09:57:47 -- common/autotest_common.sh@941 -- # uname 00:29:24.567 09:57:47 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:29:24.567 09:57:47 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 80453 00:29:24.567 09:57:47 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:29:24.567 09:57:47 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:29:24.567 killing process with pid 80453 00:29:24.567 09:57:47 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 80453' 00:29:24.567 09:57:47 -- common/autotest_common.sh@955 -- # kill 80453 00:29:24.567 09:57:47 -- common/autotest_common.sh@960 -- # wait 80453 00:29:25.944 09:57:49 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:29:25.944 09:57:49 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:29:25.944 09:57:49 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:29:25.944 09:57:49 -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:29:25.944 09:57:49 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:29:25.944 09:57:49 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:25.944 09:57:49 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:29:25.944 09:57:49 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:25.944 09:57:49 -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:29:25.944 00:29:25.944 real 1m3.445s 00:29:25.944 user 2m55.496s 00:29:25.944 sys 0m17.434s 00:29:25.944 ************************************ 00:29:25.944 END TEST nvmf_multipath 00:29:25.944 ************************************ 00:29:25.944 09:57:49 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:29:25.944 09:57:49 -- common/autotest_common.sh@10 -- # set +x 00:29:25.944 09:57:49 -- nvmf/nvmf.sh@115 -- # run_test nvmf_timeout /home/vagrant/spdk_repo/spdk/test/nvmf/host/timeout.sh --transport=tcp 00:29:25.944 09:57:49 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:29:25.944 09:57:49 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:29:25.944 09:57:49 -- common/autotest_common.sh@10 -- # set +x 00:29:25.944 ************************************ 00:29:25.944 START TEST nvmf_timeout 00:29:25.944 ************************************ 00:29:25.944 09:57:49 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/timeout.sh --transport=tcp 00:29:26.203 * Looking for test storage... 
00:29:26.203 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:29:26.203 09:57:49 -- host/timeout.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:29:26.203 09:57:49 -- nvmf/common.sh@7 -- # uname -s 00:29:26.203 09:57:49 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:29:26.203 09:57:49 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:29:26.203 09:57:49 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:29:26.203 09:57:49 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:29:26.203 09:57:49 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:29:26.203 09:57:49 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:29:26.203 09:57:49 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:29:26.203 09:57:49 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:29:26.203 09:57:49 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:29:26.203 09:57:49 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:29:26.203 09:57:49 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:e0c1fd61-2902-4c83-8893-eeb512aeb67d 00:29:26.203 09:57:49 -- nvmf/common.sh@18 -- # NVME_HOSTID=e0c1fd61-2902-4c83-8893-eeb512aeb67d 00:29:26.203 09:57:49 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:29:26.203 09:57:49 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:29:26.203 09:57:49 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:29:26.203 09:57:49 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:29:26.203 09:57:49 -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:29:26.203 09:57:49 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:29:26.203 09:57:49 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:26.203 09:57:49 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:26.203 09:57:49 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:26.203 09:57:49 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:26.203 09:57:49 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:26.203 09:57:49 -- paths/export.sh@5 -- # export PATH 00:29:26.203 09:57:49 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:26.203 09:57:49 -- nvmf/common.sh@47 -- # : 0 00:29:26.203 09:57:49 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:29:26.203 09:57:49 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:29:26.203 09:57:49 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:29:26.203 09:57:49 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:29:26.203 09:57:49 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:29:26.203 09:57:49 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:29:26.203 09:57:49 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:29:26.203 09:57:49 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:29:26.203 09:57:49 -- host/timeout.sh@11 -- # MALLOC_BDEV_SIZE=64 00:29:26.203 09:57:49 -- host/timeout.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:29:26.203 09:57:49 -- host/timeout.sh@14 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:29:26.203 09:57:49 -- host/timeout.sh@15 -- # bpf_sh=/home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 00:29:26.203 09:57:49 -- host/timeout.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:29:26.203 09:57:49 -- host/timeout.sh@19 -- # nvmftestinit 00:29:26.203 09:57:49 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:29:26.203 09:57:49 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:29:26.203 09:57:49 -- nvmf/common.sh@437 -- # prepare_net_devs 00:29:26.203 09:57:49 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:29:26.203 09:57:49 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:29:26.203 09:57:49 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:26.203 09:57:49 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:29:26.203 09:57:49 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:26.203 09:57:49 -- nvmf/common.sh@403 -- # [[ virt != virt ]] 00:29:26.203 09:57:49 -- nvmf/common.sh@405 -- # [[ no == yes ]] 00:29:26.203 09:57:49 -- nvmf/common.sh@412 -- # [[ virt == phy ]] 00:29:26.203 09:57:49 -- nvmf/common.sh@415 -- # [[ virt == phy-fallback ]] 00:29:26.203 09:57:49 -- nvmf/common.sh@420 -- # [[ tcp == tcp ]] 00:29:26.203 09:57:49 -- nvmf/common.sh@421 -- # nvmf_veth_init 00:29:26.203 09:57:49 -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:29:26.203 09:57:49 -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:29:26.203 09:57:49 -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 
00:29:26.203 09:57:49 -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:29:26.203 09:57:49 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:29:26.203 09:57:49 -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:29:26.203 09:57:49 -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:29:26.203 09:57:49 -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:29:26.203 09:57:49 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:29:26.203 09:57:49 -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:29:26.203 09:57:49 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:29:26.203 09:57:49 -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:29:26.203 09:57:49 -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:29:26.203 09:57:49 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:29:26.203 Cannot find device "nvmf_tgt_br" 00:29:26.203 09:57:49 -- nvmf/common.sh@155 -- # true 00:29:26.203 09:57:49 -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:29:26.203 Cannot find device "nvmf_tgt_br2" 00:29:26.203 09:57:49 -- nvmf/common.sh@156 -- # true 00:29:26.203 09:57:49 -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:29:26.203 09:57:49 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:29:26.203 Cannot find device "nvmf_tgt_br" 00:29:26.203 09:57:49 -- nvmf/common.sh@158 -- # true 00:29:26.203 09:57:49 -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:29:26.203 Cannot find device "nvmf_tgt_br2" 00:29:26.203 09:57:49 -- nvmf/common.sh@159 -- # true 00:29:26.203 09:57:49 -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:29:26.203 09:57:49 -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:29:26.203 09:57:49 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:29:26.203 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:29:26.203 09:57:49 -- nvmf/common.sh@162 -- # true 00:29:26.203 09:57:49 -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:29:26.203 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:29:26.203 09:57:49 -- nvmf/common.sh@163 -- # true 00:29:26.203 09:57:49 -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:29:26.203 09:57:49 -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:29:26.203 09:57:49 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:29:26.203 09:57:49 -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:29:26.203 09:57:49 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:29:26.203 09:57:49 -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:29:26.203 09:57:49 -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:29:26.203 09:57:49 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:29:26.203 09:57:49 -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:29:26.203 09:57:49 -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:29:26.203 09:57:49 -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:29:26.462 09:57:49 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:29:26.462 09:57:49 -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 
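For readers following the trace, the nvmf_veth_init sequence above reduces to a small amount of ip(8) plumbing. The sketch below restates only commands already shown in the log (device, namespace, and address names unchanged); the ordering is condensed for readability and the initial cleanup probes ("Cannot find device ...") are omitted.

    # Recreate the test topology built by nvmf_veth_init (names and addresses as logged)
    ip netns add nvmf_tgt_ns_spdk                                   # target side lives in its own network namespace
    ip link add nvmf_init_if type veth peer name nvmf_init_br       # initiator veth pair
    ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br        # first target veth pair
    ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2       # second target veth pair
    ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk                 # move the target ends into the namespace
    ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
    ip addr add 10.0.0.1/24 dev nvmf_init_if                        # initiator: 10.0.0.1
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if    # first target address: 10.0.0.2
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2   # second target address: 10.0.0.3
    ip link set nvmf_init_if up
    ip link set nvmf_init_br up
    ip link set nvmf_tgt_br up

The remaining bring-up, the nvmf_br bridge and enslaved peers, the iptables rules, and the ping connectivity checks follow immediately in the trace below.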
00:29:26.462 09:57:49 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:29:26.462 09:57:49 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:29:26.462 09:57:49 -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:29:26.462 09:57:49 -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:29:26.462 09:57:49 -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:29:26.462 09:57:49 -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:29:26.462 09:57:49 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:29:26.462 09:57:49 -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:29:26.462 09:57:49 -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:29:26.462 09:57:49 -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:29:26.462 09:57:49 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:29:26.462 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:29:26.462 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.065 ms 00:29:26.462 00:29:26.462 --- 10.0.0.2 ping statistics --- 00:29:26.462 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:26.462 rtt min/avg/max/mdev = 0.065/0.065/0.065/0.000 ms 00:29:26.462 09:57:49 -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:29:26.462 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:29:26.462 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.098 ms 00:29:26.462 00:29:26.462 --- 10.0.0.3 ping statistics --- 00:29:26.462 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:26.462 rtt min/avg/max/mdev = 0.098/0.098/0.098/0.000 ms 00:29:26.462 09:57:49 -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:29:26.462 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:29:26.462 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.026 ms 00:29:26.462 00:29:26.462 --- 10.0.0.1 ping statistics --- 00:29:26.462 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:26.462 rtt min/avg/max/mdev = 0.026/0.026/0.026/0.000 ms 00:29:26.462 09:57:49 -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:29:26.462 09:57:49 -- nvmf/common.sh@422 -- # return 0 00:29:26.462 09:57:49 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:29:26.462 09:57:49 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:29:26.462 09:57:49 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:29:26.462 09:57:49 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:29:26.462 09:57:49 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:29:26.462 09:57:49 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:29:26.462 09:57:49 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:29:26.462 09:57:49 -- host/timeout.sh@21 -- # nvmfappstart -m 0x3 00:29:26.462 09:57:49 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:29:26.462 09:57:49 -- common/autotest_common.sh@710 -- # xtrace_disable 00:29:26.462 09:57:49 -- common/autotest_common.sh@10 -- # set +x 00:29:26.462 09:57:49 -- nvmf/common.sh@470 -- # nvmfpid=81642 00:29:26.462 09:57:49 -- nvmf/common.sh@471 -- # waitforlisten 81642 00:29:26.462 09:57:49 -- nvmf/common.sh@469 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:29:26.462 09:57:49 -- common/autotest_common.sh@817 -- # '[' -z 81642 ']' 00:29:26.462 09:57:49 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:26.462 09:57:49 -- common/autotest_common.sh@822 -- # local max_retries=100 00:29:26.462 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:26.462 09:57:49 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:26.462 09:57:49 -- common/autotest_common.sh@826 -- # xtrace_disable 00:29:26.462 09:57:49 -- common/autotest_common.sh@10 -- # set +x 00:29:26.462 [2024-04-18 09:57:49.835706] Starting SPDK v24.05-pre git sha1 65b4e17c6 / DPDK 23.11.0 initialization... 00:29:26.462 [2024-04-18 09:57:49.836483] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:29:26.771 [2024-04-18 09:57:50.010582] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 2 00:29:27.043 [2024-04-18 09:57:50.272833] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:29:27.043 [2024-04-18 09:57:50.272895] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:29:27.043 [2024-04-18 09:57:50.272917] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:29:27.043 [2024-04-18 09:57:50.272945] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:29:27.043 [2024-04-18 09:57:50.272962] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
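At this point the target application is started inside the namespace. A minimal stand-in for the nvmfappstart/waitforlisten pair above is sketched here; the polling loop is illustrative only (the real waitforlisten helper in autotest_common.sh does more bookkeeping), while the binary path, flags, and RPC socket are taken from the log.

    # Launch nvmf_tgt in the test namespace and wait for its RPC socket to answer
    ip netns exec nvmf_tgt_ns_spdk \
        /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 &
    nvmfpid=$!                                                   # pid 81642 in this run

    # Simplified stand-in for waitforlisten: poll until the app answers on /var/tmp/spdk.sock
    until /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
        sleep 0.5
    done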
00:29:27.043 [2024-04-18 09:57:50.273141] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:29:27.043 [2024-04-18 09:57:50.273186] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:29:27.609 09:57:50 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:29:27.610 09:57:50 -- common/autotest_common.sh@850 -- # return 0 00:29:27.610 09:57:50 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:29:27.610 09:57:50 -- common/autotest_common.sh@716 -- # xtrace_disable 00:29:27.610 09:57:50 -- common/autotest_common.sh@10 -- # set +x 00:29:27.610 09:57:50 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:29:27.610 09:57:50 -- host/timeout.sh@23 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $bdevperf_pid || :; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:29:27.610 09:57:50 -- host/timeout.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:29:27.868 [2024-04-18 09:57:51.011707] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:27.868 09:57:51 -- host/timeout.sh@26 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:29:28.126 Malloc0 00:29:28.126 09:57:51 -- host/timeout.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:29:28.384 09:57:51 -- host/timeout.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:29:28.642 09:57:51 -- host/timeout.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:29:28.642 [2024-04-18 09:57:52.019309] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:29:28.900 09:57:52 -- host/timeout.sh@32 -- # bdevperf_pid=81691 00:29:28.900 09:57:52 -- host/timeout.sh@31 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -f 00:29:28.900 09:57:52 -- host/timeout.sh@34 -- # waitforlisten 81691 /var/tmp/bdevperf.sock 00:29:28.900 09:57:52 -- common/autotest_common.sh@817 -- # '[' -z 81691 ']' 00:29:28.900 09:57:52 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:29:28.900 09:57:52 -- common/autotest_common.sh@822 -- # local max_retries=100 00:29:28.900 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:29:28.900 09:57:52 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:29:28.900 09:57:52 -- common/autotest_common.sh@826 -- # xtrace_disable 00:29:28.900 09:57:52 -- common/autotest_common.sh@10 -- # set +x 00:29:28.900 [2024-04-18 09:57:52.126856] Starting SPDK v24.05-pre git sha1 65b4e17c6 / DPDK 23.11.0 initialization... 
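Condensed from the trace above, provisioning the target for this test comes down to five RPC calls against the running nvmf_tgt. They are restated below exactly as logged, using the rpc_py shorthand the script itself defines; this is a reading aid, not a new sequence.

    rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    $rpc_py nvmf_create_transport -t tcp -o -u 8192                                      # TCP transport with the options used throughout this run
    $rpc_py bdev_malloc_create 64 512 -b Malloc0                                         # 64 MB malloc bdev, 512-byte blocks (MALLOC_BDEV_SIZE/MALLOC_BLOCK_SIZE)
    $rpc_py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001    # subsystem, any host allowed
    $rpc_py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0                     # expose Malloc0 as a namespace
    $rpc_py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420   # listen on 10.0.0.2:4420

The bdevperf initiator that exercises this subsystem is started next in the trace, with its own RPC socket at /var/tmp/bdevperf.sock.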
00:29:28.900 [2024-04-18 09:57:52.127030] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81691 ] 00:29:28.900 [2024-04-18 09:57:52.289790] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:29.158 [2024-04-18 09:57:52.526493] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:29:29.730 09:57:53 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:29:29.730 09:57:53 -- common/autotest_common.sh@850 -- # return 0 00:29:29.730 09:57:53 -- host/timeout.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 00:29:30.003 09:57:53 -- host/timeout.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 --ctrlr-loss-timeout-sec 5 --reconnect-delay-sec 2 00:29:30.274 NVMe0n1 00:29:30.274 09:57:53 -- host/timeout.sh@51 -- # rpc_pid=81715 00:29:30.274 09:57:53 -- host/timeout.sh@50 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:29:30.274 09:57:53 -- host/timeout.sh@53 -- # sleep 1 00:29:30.549 Running I/O for 10 seconds... 00:29:31.484 09:57:54 -- host/timeout.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:29:31.745 [2024-04-18 09:57:54.899516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:52184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:31.745 [2024-04-18 09:57:54.899602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:31.745 [2024-04-18 09:57:54.899642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:52192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:31.745 [2024-04-18 09:57:54.899663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:31.745 [2024-04-18 09:57:54.899681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:52200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:31.745 [2024-04-18 09:57:54.899699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:31.745 [2024-04-18 09:57:54.899716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:52208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:31.745 [2024-04-18 09:57:54.899732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:31.745 [2024-04-18 09:57:54.899749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:52216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:31.745 [2024-04-18 09:57:54.899766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:31.745 [2024-04-18 09:57:54.899782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:52224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:31.745 [2024-04-18 09:57:54.899799] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:31.745 [2024-04-18 09:57:54.899816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:52232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:31.745 [2024-04-18 09:57:54.899835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:31.745 [2024-04-18 09:57:54.899852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:52240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:31.745 [2024-04-18 09:57:54.899872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:31.745 [2024-04-18 09:57:54.899890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:52248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:31.745 [2024-04-18 09:57:54.899907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:31.745 [2024-04-18 09:57:54.899923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:52256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:31.745 [2024-04-18 09:57:54.899940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:31.745 [2024-04-18 09:57:54.899956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:52264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:31.745 [2024-04-18 09:57:54.899973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:31.745 [2024-04-18 09:57:54.899990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:52272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:31.745 [2024-04-18 09:57:54.900007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:31.745 [2024-04-18 09:57:54.900023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:52280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:31.745 [2024-04-18 09:57:54.900040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:31.745 [2024-04-18 09:57:54.900057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:52288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:31.745 [2024-04-18 09:57:54.900073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:31.745 [2024-04-18 09:57:54.900090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:52296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:31.745 [2024-04-18 09:57:54.900106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:31.745 [2024-04-18 09:57:54.900135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:52304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:31.745 [2024-04-18 09:57:54.900159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:31.745 [2024-04-18 09:57:54.900178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:52312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:31.745 [2024-04-18 09:57:54.900196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:31.745 [2024-04-18 09:57:54.900213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:52320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:31.745 [2024-04-18 09:57:54.900229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:31.745 [2024-04-18 09:57:54.900246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:52328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:31.745 [2024-04-18 09:57:54.900262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:31.745 [2024-04-18 09:57:54.900279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:52336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:31.745 [2024-04-18 09:57:54.900297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:31.745 [2024-04-18 09:57:54.900313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:52344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:31.746 [2024-04-18 09:57:54.900330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:31.746 [2024-04-18 09:57:54.900346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:52352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:31.746 [2024-04-18 09:57:54.900363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:31.746 [2024-04-18 09:57:54.900379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:52360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:31.746 [2024-04-18 09:57:54.900396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:31.746 [2024-04-18 09:57:54.900412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:51368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:31.746 [2024-04-18 09:57:54.900432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:31.746 [2024-04-18 09:57:54.900449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:51376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:31.746 [2024-04-18 09:57:54.900466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:31.746 [2024-04-18 09:57:54.900482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:51384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:31.746 [2024-04-18 09:57:54.900498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:29:31.746 [2024-04-18 09:57:54.900514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:51392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:31.746 [2024-04-18 09:57:54.900531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:31.746 [2024-04-18 09:57:54.900547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:51400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:31.746 [2024-04-18 09:57:54.900563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:31.746 [2024-04-18 09:57:54.900579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:51408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:31.746 [2024-04-18 09:57:54.900596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:31.746 [2024-04-18 09:57:54.900617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:51416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:31.746 [2024-04-18 09:57:54.900644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:31.746 [2024-04-18 09:57:54.900661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:51424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:31.746 [2024-04-18 09:57:54.900677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:31.746 [2024-04-18 09:57:54.900694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:51432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:31.746 [2024-04-18 09:57:54.900724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:31.746 [2024-04-18 09:57:54.900741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:51440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:31.746 [2024-04-18 09:57:54.900759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:31.746 [2024-04-18 09:57:54.900776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:51448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:31.746 [2024-04-18 09:57:54.900793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:31.746 [2024-04-18 09:57:54.900809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:51456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:31.746 [2024-04-18 09:57:54.900826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:31.746 [2024-04-18 09:57:54.900843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:51464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:31.746 [2024-04-18 09:57:54.900859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:31.746 
[2024-04-18 09:57:54.900875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:51472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:31.746 [2024-04-18 09:57:54.900892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:31.746 [2024-04-18 09:57:54.900908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:51480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:31.746 [2024-04-18 09:57:54.900925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:31.746 [2024-04-18 09:57:54.900942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:52368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:31.746 [2024-04-18 09:57:54.900958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:31.746 [2024-04-18 09:57:54.900975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:52376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:31.746 [2024-04-18 09:57:54.900995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:31.746 [2024-04-18 09:57:54.901011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:51488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:31.746 [2024-04-18 09:57:54.901028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:31.746 [2024-04-18 09:57:54.901045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:51496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:31.746 [2024-04-18 09:57:54.901062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:31.746 [2024-04-18 09:57:54.901078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:51504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:31.746 [2024-04-18 09:57:54.901094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:31.746 [2024-04-18 09:57:54.901115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:51512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:31.746 [2024-04-18 09:57:54.901160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:31.746 [2024-04-18 09:57:54.901178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:51520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:31.746 [2024-04-18 09:57:54.901195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:31.746 [2024-04-18 09:57:54.901211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:51528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:31.746 [2024-04-18 09:57:54.901227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:31.746 [2024-04-18 09:57:54.901262] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:51536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:31.746 [2024-04-18 09:57:54.901281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:31.746 [2024-04-18 09:57:54.901298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:52384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:31.746 [2024-04-18 09:57:54.901318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:31.746 [2024-04-18 09:57:54.901334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:51544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:31.746 [2024-04-18 09:57:54.901350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:31.746 [2024-04-18 09:57:54.901367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:51552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:31.746 [2024-04-18 09:57:54.901383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:31.746 [2024-04-18 09:57:54.901399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:51560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:31.746 [2024-04-18 09:57:54.901415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:31.746 [2024-04-18 09:57:54.901431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:51568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:31.746 [2024-04-18 09:57:54.901448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:31.746 [2024-04-18 09:57:54.901464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:51576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:31.746 [2024-04-18 09:57:54.901481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:31.746 [2024-04-18 09:57:54.901497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:51584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:31.746 [2024-04-18 09:57:54.901513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:31.746 [2024-04-18 09:57:54.901529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:51592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:31.746 [2024-04-18 09:57:54.901546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:31.746 [2024-04-18 09:57:54.901562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:51600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:31.746 [2024-04-18 09:57:54.901594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:31.746 [2024-04-18 09:57:54.901611] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:96 nsid:1 lba:51608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:31.746 [2024-04-18 09:57:54.901627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:31.746 [2024-04-18 09:57:54.901644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:51616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:31.746 [2024-04-18 09:57:54.901661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:31.746 [2024-04-18 09:57:54.901678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:51624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:31.746 [2024-04-18 09:57:54.901695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:31.746 [2024-04-18 09:57:54.901711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:51632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:31.746 [2024-04-18 09:57:54.901728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:31.746 [2024-04-18 09:57:54.901744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:51640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:31.746 [2024-04-18 09:57:54.901760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:31.746 [2024-04-18 09:57:54.901776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:51648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:31.746 [2024-04-18 09:57:54.901793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:31.746 [2024-04-18 09:57:54.901810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:51656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:31.746 [2024-04-18 09:57:54.901826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:31.746 [2024-04-18 09:57:54.901842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:51664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:31.746 [2024-04-18 09:57:54.901862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:31.746 [2024-04-18 09:57:54.901879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:51672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:31.746 [2024-04-18 09:57:54.901895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:31.746 [2024-04-18 09:57:54.901916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:51680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:31.746 [2024-04-18 09:57:54.901933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:31.746 [2024-04-18 09:57:54.901948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 
lba:51688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:31.746 [2024-04-18 09:57:54.901965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:31.746 [2024-04-18 09:57:54.901980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:51696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:31.746 [2024-04-18 09:57:54.901997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:31.746 [2024-04-18 09:57:54.902013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:51704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:31.746 [2024-04-18 09:57:54.902029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:31.746 [2024-04-18 09:57:54.902046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:51712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:31.746 [2024-04-18 09:57:54.902062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:31.746 [2024-04-18 09:57:54.902078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:51720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:31.746 [2024-04-18 09:57:54.902094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:31.746 [2024-04-18 09:57:54.902111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:51728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:31.746 [2024-04-18 09:57:54.902145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:31.746 [2024-04-18 09:57:54.902165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:51736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:31.746 [2024-04-18 09:57:54.902182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:31.746 [2024-04-18 09:57:54.902198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:51744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:31.746 [2024-04-18 09:57:54.902227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:31.746 [2024-04-18 09:57:54.902243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:51752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:31.746 [2024-04-18 09:57:54.902260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:31.746 [2024-04-18 09:57:54.902275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:51760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:31.746 [2024-04-18 09:57:54.902292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:31.746 [2024-04-18 09:57:54.902308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:51768 len:8 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:29:31.746 [2024-04-18 09:57:54.902325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:31.746 [2024-04-18 09:57:54.902341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:51776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:31.746 [2024-04-18 09:57:54.902358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:31.746 [2024-04-18 09:57:54.902374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:51784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:31.746 [2024-04-18 09:57:54.902392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:31.746 [2024-04-18 09:57:54.902408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:51792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:31.746 [2024-04-18 09:57:54.902427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:31.746 [2024-04-18 09:57:54.902444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:51800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:31.746 [2024-04-18 09:57:54.902460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:31.746 [2024-04-18 09:57:54.902477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:51808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:31.746 [2024-04-18 09:57:54.902493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:31.746 [2024-04-18 09:57:54.902509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:51816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:31.746 [2024-04-18 09:57:54.902525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:31.746 [2024-04-18 09:57:54.902542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:51824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:31.746 [2024-04-18 09:57:54.902558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:31.746 [2024-04-18 09:57:54.902575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:51832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:31.746 [2024-04-18 09:57:54.902592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:31.746 [2024-04-18 09:57:54.902608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:51840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:31.746 [2024-04-18 09:57:54.902626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:31.746 [2024-04-18 09:57:54.902643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:51848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:31.746 [2024-04-18 
09:57:54.902659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:31.746 [2024-04-18 09:57:54.902684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:51856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:31.746 [2024-04-18 09:57:54.902705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:31.746 [2024-04-18 09:57:54.902721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:51864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:31.746 [2024-04-18 09:57:54.902737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:31.746 [2024-04-18 09:57:54.902754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:51872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:31.746 [2024-04-18 09:57:54.902771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:31.746 [2024-04-18 09:57:54.902786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:51880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:31.746 [2024-04-18 09:57:54.902803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:31.746 [2024-04-18 09:57:54.902819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:51888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:31.746 [2024-04-18 09:57:54.902835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:31.746 [2024-04-18 09:57:54.902851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:51896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:31.746 [2024-04-18 09:57:54.902867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:31.746 [2024-04-18 09:57:54.902883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:51904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:31.746 [2024-04-18 09:57:54.902899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:31.746 [2024-04-18 09:57:54.902915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:51912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:31.746 [2024-04-18 09:57:54.902932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:31.746 [2024-04-18 09:57:54.902948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:51920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:31.746 [2024-04-18 09:57:54.902967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:31.746 [2024-04-18 09:57:54.902983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:51928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:31.746 [2024-04-18 09:57:54.902999] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:31.747 [2024-04-18 09:57:54.903028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:51936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:31.747 [2024-04-18 09:57:54.903047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:31.747 [2024-04-18 09:57:54.903064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:51944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:31.747 [2024-04-18 09:57:54.903082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:31.747 [2024-04-18 09:57:54.903098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:51952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:31.747 [2024-04-18 09:57:54.903115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:31.747 [2024-04-18 09:57:54.903141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:51960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:31.747 [2024-04-18 09:57:54.903160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:31.747 [2024-04-18 09:57:54.903177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:51968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:31.747 [2024-04-18 09:57:54.903193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:31.747 [2024-04-18 09:57:54.903209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:51976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:31.747 [2024-04-18 09:57:54.903226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:31.747 [2024-04-18 09:57:54.903247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:51984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:31.747 [2024-04-18 09:57:54.903267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:31.747 [2024-04-18 09:57:54.903283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:51992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:31.747 [2024-04-18 09:57:54.903308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:31.747 [2024-04-18 09:57:54.903325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:52000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:31.747 [2024-04-18 09:57:54.903341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:31.747 [2024-04-18 09:57:54.903357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:52008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:31.747 [2024-04-18 09:57:54.903374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:31.747 [2024-04-18 09:57:54.903401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:52016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:31.747 [2024-04-18 09:57:54.903417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:31.747 [2024-04-18 09:57:54.903433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:52024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:31.747 [2024-04-18 09:57:54.903450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:31.747 [2024-04-18 09:57:54.903466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:52032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:31.747 [2024-04-18 09:57:54.903482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:31.747 [2024-04-18 09:57:54.903510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:52040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:31.747 [2024-04-18 09:57:54.903529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:31.747 [2024-04-18 09:57:54.903546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:52048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:31.747 [2024-04-18 09:57:54.903576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:31.747 [2024-04-18 09:57:54.903592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:52056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:31.747 [2024-04-18 09:57:54.903608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:31.747 [2024-04-18 09:57:54.903625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:52064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:31.747 [2024-04-18 09:57:54.903641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:31.747 [2024-04-18 09:57:54.903657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:52072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:31.747 [2024-04-18 09:57:54.903673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:31.747 [2024-04-18 09:57:54.903689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:52080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:31.747 [2024-04-18 09:57:54.903705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:31.747 [2024-04-18 09:57:54.903721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:52088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:31.747 [2024-04-18 09:57:54.903738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:31.747 [2024-04-18 09:57:54.903755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:52096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:31.747 [2024-04-18 09:57:54.903771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:31.747 [2024-04-18 09:57:54.903787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:52104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:31.747 [2024-04-18 09:57:54.903803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:31.747 [2024-04-18 09:57:54.903825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:52112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:31.747 [2024-04-18 09:57:54.903844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:31.747 [2024-04-18 09:57:54.903860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:52120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:31.747 [2024-04-18 09:57:54.903883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:31.747 [2024-04-18 09:57:54.903900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:52128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:31.747 [2024-04-18 09:57:54.903916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:31.747 [2024-04-18 09:57:54.903932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:52136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:31.747 [2024-04-18 09:57:54.903949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:31.747 [2024-04-18 09:57:54.903965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:52144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:31.747 [2024-04-18 09:57:54.903981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:31.747 [2024-04-18 09:57:54.903998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:52152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:31.747 [2024-04-18 09:57:54.904015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:31.747 [2024-04-18 09:57:54.904044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:52160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:31.747 [2024-04-18 09:57:54.904060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:31.747 [2024-04-18 09:57:54.904077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:52168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:31.747 [2024-04-18 09:57:54.904093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:29:31.747 [2024-04-18 09:57:54.904109] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x614000007240 is same with the state(5) to be set 00:29:31.747 [2024-04-18 09:57:54.904148] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:29:31.747 [2024-04-18 09:57:54.904164] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:29:31.747 [2024-04-18 09:57:54.904185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:52176 len:8 PRP1 0x0 PRP2 0x0 00:29:31.747 [2024-04-18 09:57:54.904200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:31.747 [2024-04-18 09:57:54.904471] bdev_nvme.c:1600:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x614000007240 was disconnected and freed. reset controller. 00:29:31.747 [2024-04-18 09:57:54.904621] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:29:31.747 [2024-04-18 09:57:54.904646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:31.747 [2024-04-18 09:57:54.904667] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:29:31.747 [2024-04-18 09:57:54.904681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:31.747 [2024-04-18 09:57:54.904698] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:29:31.747 [2024-04-18 09:57:54.904711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:31.747 [2024-04-18 09:57:54.904728] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:29:31.747 [2024-04-18 09:57:54.904741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:31.747 [2024-04-18 09:57:54.904757] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x614000005040 is same with the state(5) to be set 00:29:31.747 [2024-04-18 09:57:54.905012] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:31.747 [2024-04-18 09:57:54.905083] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x614000005040 (9): Bad file descriptor 00:29:31.747 [2024-04-18 09:57:54.905265] uring.c: 641:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.747 [2024-04-18 09:57:54.905358] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.747 [2024-04-18 09:57:54.905444] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.747 [2024-04-18 09:57:54.905471] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000005040 with addr=10.0.0.2, port=4420 00:29:31.747 [2024-04-18 09:57:54.905495] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x614000005040 is same with the state(5) to be set 00:29:31.747 [2024-04-18 09:57:54.905526] 
nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x614000005040 (9): Bad file descriptor 00:29:31.747 [2024-04-18 09:57:54.905557] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:31.747 [2024-04-18 09:57:54.905573] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:31.747 [2024-04-18 09:57:54.905592] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:31.747 [2024-04-18 09:57:54.905626] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:31.747 [2024-04-18 09:57:54.905648] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:31.747 09:57:54 -- host/timeout.sh@56 -- # sleep 2 00:29:33.651 [2024-04-18 09:57:56.905880] uring.c: 641:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.651 [2024-04-18 09:57:56.906031] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.651 [2024-04-18 09:57:56.906098] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.651 [2024-04-18 09:57:56.906139] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000005040 with addr=10.0.0.2, port=4420 00:29:33.651 [2024-04-18 09:57:56.906169] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x614000005040 is same with the state(5) to be set 00:29:33.651 [2024-04-18 09:57:56.906215] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x614000005040 (9): Bad file descriptor 00:29:33.651 [2024-04-18 09:57:56.906267] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:33.651 [2024-04-18 09:57:56.906286] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:33.651 [2024-04-18 09:57:56.906307] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:33.651 [2024-04-18 09:57:56.906351] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:33.651 [2024-04-18 09:57:56.906373] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:33.651 09:57:56 -- host/timeout.sh@57 -- # get_controller 00:29:33.651 09:57:56 -- host/timeout.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:29:33.651 09:57:56 -- host/timeout.sh@41 -- # jq -r '.[].name' 00:29:33.910 09:57:57 -- host/timeout.sh@57 -- # [[ NVMe0 == \N\V\M\e\0 ]] 00:29:33.910 09:57:57 -- host/timeout.sh@58 -- # get_bdev 00:29:33.910 09:57:57 -- host/timeout.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs 00:29:33.910 09:57:57 -- host/timeout.sh@37 -- # jq -r '.[].name' 00:29:34.168 09:57:57 -- host/timeout.sh@58 -- # [[ NVMe0n1 == \N\V\M\e\0\n\1 ]] 00:29:34.168 09:57:57 -- host/timeout.sh@61 -- # sleep 5 00:29:35.544 [2024-04-18 09:57:58.906617] uring.c: 641:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.544 [2024-04-18 09:57:58.906762] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.544 [2024-04-18 09:57:58.906828] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.544 [2024-04-18 09:57:58.906855] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000005040 with addr=10.0.0.2, port=4420 00:29:35.544 [2024-04-18 09:57:58.906881] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x614000005040 is same with the state(5) to be set 00:29:35.544 [2024-04-18 09:57:58.906924] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x614000005040 (9): Bad file descriptor 00:29:35.544 [2024-04-18 09:57:58.906958] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:35.544 [2024-04-18 09:57:58.906974] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:35.544 [2024-04-18 09:57:58.906997] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:35.544 [2024-04-18 09:57:58.907065] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:35.544 [2024-04-18 09:57:58.907089] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:38.110 [2024-04-18 09:58:00.907211] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:38.680 00:29:38.680 Latency(us) 00:29:38.680 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:38.680 Job: NVMe0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:29:38.680 Verification LBA range: start 0x0 length 0x4000 00:29:38.680 NVMe0n1 : 8.21 781.90 3.05 15.59 0.00 160369.05 4676.89 7046430.72 00:29:38.680 =================================================================================================================== 00:29:38.680 Total : 781.90 3.05 15.59 0.00 160369.05 4676.89 7046430.72 00:29:38.680 0 00:29:39.247 09:58:02 -- host/timeout.sh@62 -- # get_controller 00:29:39.247 09:58:02 -- host/timeout.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:29:39.247 09:58:02 -- host/timeout.sh@41 -- # jq -r '.[].name' 00:29:39.506 09:58:02 -- host/timeout.sh@62 -- # [[ '' == '' ]] 00:29:39.506 09:58:02 -- host/timeout.sh@63 -- # get_bdev 00:29:39.506 09:58:02 -- host/timeout.sh@37 -- # jq -r '.[].name' 00:29:39.506 09:58:02 -- host/timeout.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs 00:29:39.765 09:58:03 -- host/timeout.sh@63 -- # [[ '' == '' ]] 00:29:39.765 09:58:03 -- host/timeout.sh@65 -- # wait 81715 00:29:39.765 09:58:03 -- host/timeout.sh@67 -- # killprocess 81691 00:29:39.765 09:58:03 -- common/autotest_common.sh@936 -- # '[' -z 81691 ']' 00:29:39.765 09:58:03 -- common/autotest_common.sh@940 -- # kill -0 81691 00:29:39.765 09:58:03 -- common/autotest_common.sh@941 -- # uname 00:29:39.765 09:58:03 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:29:39.765 09:58:03 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 81691 00:29:39.765 09:58:03 -- common/autotest_common.sh@942 -- # process_name=reactor_2 00:29:39.765 09:58:03 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']' 00:29:39.765 killing process with pid 81691 00:29:39.765 09:58:03 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 81691' 00:29:39.765 Received shutdown signal, test time was about 9.344244 seconds 00:29:39.765 00:29:39.765 Latency(us) 00:29:39.765 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:39.765 =================================================================================================================== 00:29:39.765 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:29:39.765 09:58:03 -- common/autotest_common.sh@955 -- # kill 81691 00:29:39.765 09:58:03 -- common/autotest_common.sh@960 -- # wait 81691 00:29:41.143 09:58:04 -- host/timeout.sh@71 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:29:41.143 [2024-04-18 09:58:04.489379] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:29:41.143 09:58:04 -- host/timeout.sh@74 -- # bdevperf_pid=81844 00:29:41.143 09:58:04 -- host/timeout.sh@73 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -f 00:29:41.143 09:58:04 -- host/timeout.sh@76 -- # waitforlisten 81844 /var/tmp/bdevperf.sock 00:29:41.143 09:58:04 -- common/autotest_common.sh@817 -- # '[' -z 81844 ']' 00:29:41.143 09:58:04 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:29:41.143 09:58:04 -- common/autotest_common.sh@822 -- # local max_retries=100 00:29:41.143 09:58:04 -- common/autotest_common.sh@824 -- # echo 'Waiting 
for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:29:41.143 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:29:41.143 09:58:04 -- common/autotest_common.sh@826 -- # xtrace_disable 00:29:41.143 09:58:04 -- common/autotest_common.sh@10 -- # set +x 00:29:41.402 [2024-04-18 09:58:04.617996] Starting SPDK v24.05-pre git sha1 65b4e17c6 / DPDK 23.11.0 initialization... 00:29:41.402 [2024-04-18 09:58:04.618243] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81844 ] 00:29:41.402 [2024-04-18 09:58:04.793152] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:41.969 [2024-04-18 09:58:05.072857] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:29:42.226 09:58:05 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:29:42.226 09:58:05 -- common/autotest_common.sh@850 -- # return 0 00:29:42.226 09:58:05 -- host/timeout.sh@78 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 00:29:42.485 09:58:05 -- host/timeout.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 --ctrlr-loss-timeout-sec 5 --fast-io-fail-timeout-sec 2 --reconnect-delay-sec 1 00:29:42.742 NVMe0n1 00:29:42.742 09:58:06 -- host/timeout.sh@83 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:29:42.742 09:58:06 -- host/timeout.sh@84 -- # rpc_pid=81868 00:29:42.742 09:58:06 -- host/timeout.sh@86 -- # sleep 1 00:29:43.002 Running I/O for 10 seconds... 
00:29:43.950 09:58:07 -- host/timeout.sh@87 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:29:44.212 [2024-04-18 09:58:07.352127] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003080 is same with the state(5) to be set 00:29:44.212 [2024-04-18 09:58:07.352265] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003080 is same with the state(5) to be set 00:29:44.212 [2024-04-18 09:58:07.352301] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003080 is same with the state(5) to be set 00:29:44.212 [2024-04-18 09:58:07.352321] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003080 is same with the state(5) to be set 00:29:44.212 [2024-04-18 09:58:07.352337] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003080 is same with the state(5) to be set 00:29:44.212 [2024-04-18 09:58:07.352355] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003080 is same with the state(5) to be set 00:29:44.212 [2024-04-18 09:58:07.352373] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003080 is same with the state(5) to be set 00:29:44.212 [2024-04-18 09:58:07.352391] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003080 is same with the state(5) to be set 00:29:44.212 [2024-04-18 09:58:07.352406] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003080 is same with the state(5) to be set 00:29:44.212 [2024-04-18 09:58:07.352424] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003080 is same with the state(5) to be set 00:29:44.212 [2024-04-18 09:58:07.352441] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003080 is same with the state(5) to be set 00:29:44.212 [2024-04-18 09:58:07.352459] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003080 is same with the state(5) to be set 00:29:44.212 [2024-04-18 09:58:07.352474] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003080 is same with the state(5) to be set 00:29:44.212 [2024-04-18 09:58:07.352492] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003080 is same with the state(5) to be set 00:29:44.212 [2024-04-18 09:58:07.352507] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003080 is same with the state(5) to be set 00:29:44.212 [2024-04-18 09:58:07.352635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:50184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.212 [2024-04-18 09:58:07.352692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.212 [2024-04-18 09:58:07.352737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:50192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.212 [2024-04-18 09:58:07.352755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.212 [2024-04-18 09:58:07.352776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:50200 
len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.212 [2024-04-18 09:58:07.352791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.212 [2024-04-18 09:58:07.352815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:50208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.212 [2024-04-18 09:58:07.352830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.212 [2024-04-18 09:58:07.352849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:50216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.212 [2024-04-18 09:58:07.352863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.212 [2024-04-18 09:58:07.352883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:50224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.212 [2024-04-18 09:58:07.352897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.212 [2024-04-18 09:58:07.352918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:50232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.212 [2024-04-18 09:58:07.352932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.212 [2024-04-18 09:58:07.352952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:50240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.212 [2024-04-18 09:58:07.352965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.212 [2024-04-18 09:58:07.352985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:50248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.212 [2024-04-18 09:58:07.352998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.212 [2024-04-18 09:58:07.353028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:50256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.212 [2024-04-18 09:58:07.353042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.212 [2024-04-18 09:58:07.353061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:50264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.212 [2024-04-18 09:58:07.353075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.212 [2024-04-18 09:58:07.353097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:50272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.212 [2024-04-18 09:58:07.353111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.212 [2024-04-18 09:58:07.353154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:50280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:29:44.212 [2024-04-18 09:58:07.353170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.212 [2024-04-18 09:58:07.353192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:50288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.212 [2024-04-18 09:58:07.353208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.212 [2024-04-18 09:58:07.353228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:50296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.212 [2024-04-18 09:58:07.353242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.212 [2024-04-18 09:58:07.353261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:50304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.212 [2024-04-18 09:58:07.353275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.212 [2024-04-18 09:58:07.353304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:50760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:44.212 [2024-04-18 09:58:07.353321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.212 [2024-04-18 09:58:07.353341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:50768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:44.212 [2024-04-18 09:58:07.353356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.212 [2024-04-18 09:58:07.353376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:50776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:44.212 [2024-04-18 09:58:07.353391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.212 [2024-04-18 09:58:07.353412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:50784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:44.212 [2024-04-18 09:58:07.353426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.212 [2024-04-18 09:58:07.353445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:50792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:44.212 [2024-04-18 09:58:07.353459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.212 [2024-04-18 09:58:07.353478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:50800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:44.212 [2024-04-18 09:58:07.353492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.212 [2024-04-18 09:58:07.353511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:50808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:44.212 [2024-04-18 09:58:07.353525] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.212 [2024-04-18 09:58:07.353549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:50816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:44.212 [2024-04-18 09:58:07.353563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.212 [2024-04-18 09:58:07.353583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:50312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.212 [2024-04-18 09:58:07.353596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.212 [2024-04-18 09:58:07.353617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:50320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.212 [2024-04-18 09:58:07.353632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.212 [2024-04-18 09:58:07.353651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:50328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.212 [2024-04-18 09:58:07.353665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.212 [2024-04-18 09:58:07.353689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:50336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.212 [2024-04-18 09:58:07.353703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.212 [2024-04-18 09:58:07.353725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:50344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.212 [2024-04-18 09:58:07.353739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.212 [2024-04-18 09:58:07.353759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:50352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.212 [2024-04-18 09:58:07.353773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.212 [2024-04-18 09:58:07.353792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:50360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.212 [2024-04-18 09:58:07.353806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.213 [2024-04-18 09:58:07.353825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:50368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.213 [2024-04-18 09:58:07.353839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.213 [2024-04-18 09:58:07.353859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:50376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.213 [2024-04-18 09:58:07.353873] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.213 [2024-04-18 09:58:07.353893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:50384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.213 [2024-04-18 09:58:07.353907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.213 [2024-04-18 09:58:07.353926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:50392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.213 [2024-04-18 09:58:07.353941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.213 [2024-04-18 09:58:07.353974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:50400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.213 [2024-04-18 09:58:07.353989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.213 [2024-04-18 09:58:07.354019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:50408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.213 [2024-04-18 09:58:07.354033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.213 [2024-04-18 09:58:07.354061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:50416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.213 [2024-04-18 09:58:07.354074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.213 [2024-04-18 09:58:07.354093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:50424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.213 [2024-04-18 09:58:07.354107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.213 [2024-04-18 09:58:07.354136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:50432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.213 [2024-04-18 09:58:07.354152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.213 [2024-04-18 09:58:07.354183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:50440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.213 [2024-04-18 09:58:07.354197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.213 [2024-04-18 09:58:07.354218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:50448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.213 [2024-04-18 09:58:07.354233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.213 [2024-04-18 09:58:07.354252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:50456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.213 [2024-04-18 09:58:07.354267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) 
qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.213 [2024-04-18 09:58:07.354306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:50464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.213 [2024-04-18 09:58:07.354321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.213 [2024-04-18 09:58:07.354340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:50472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.213 [2024-04-18 09:58:07.354354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.213 [2024-04-18 09:58:07.354373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:50480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.213 [2024-04-18 09:58:07.354387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.213 [2024-04-18 09:58:07.354425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:50488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.213 [2024-04-18 09:58:07.354440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.213 [2024-04-18 09:58:07.354458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:50496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.213 [2024-04-18 09:58:07.354472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.213 [2024-04-18 09:58:07.354492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:50824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:44.213 [2024-04-18 09:58:07.354506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.213 [2024-04-18 09:58:07.354524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:50832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:44.213 [2024-04-18 09:58:07.354539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.213 [2024-04-18 09:58:07.354558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:50840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:44.213 [2024-04-18 09:58:07.354572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.213 [2024-04-18 09:58:07.354594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:50848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:44.213 [2024-04-18 09:58:07.354608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.213 [2024-04-18 09:58:07.354628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:50856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:44.213 [2024-04-18 09:58:07.354643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:29:44.213 [2024-04-18 09:58:07.354667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:50864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:44.213 [2024-04-18 09:58:07.354681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.213 [2024-04-18 09:58:07.354699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:50872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:44.213 [2024-04-18 09:58:07.354713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.213 [2024-04-18 09:58:07.354731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:50880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:44.213 [2024-04-18 09:58:07.354745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.213 [2024-04-18 09:58:07.354764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:50888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:44.213 [2024-04-18 09:58:07.354778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.213 [2024-04-18 09:58:07.354798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:50896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:44.213 [2024-04-18 09:58:07.354812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.213 [2024-04-18 09:58:07.354831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:50904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:44.213 [2024-04-18 09:58:07.354845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.213 [2024-04-18 09:58:07.354867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:50912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:44.213 [2024-04-18 09:58:07.354882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.213 [2024-04-18 09:58:07.354901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:50920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:44.213 [2024-04-18 09:58:07.354915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.213 [2024-04-18 09:58:07.354934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:50928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:44.213 [2024-04-18 09:58:07.354948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.213 [2024-04-18 09:58:07.354967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:50936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:44.213 [2024-04-18 09:58:07.354988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.213 [2024-04-18 09:58:07.355014] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:50944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:44.213 [2024-04-18 09:58:07.355028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.213 [2024-04-18 09:58:07.355066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:50504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.213 [2024-04-18 09:58:07.355081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.213 [2024-04-18 09:58:07.355100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:50512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.213 [2024-04-18 09:58:07.355116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.213 [2024-04-18 09:58:07.355153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:50520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.213 [2024-04-18 09:58:07.355170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.213 [2024-04-18 09:58:07.355191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:50528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.213 [2024-04-18 09:58:07.355205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.213 [2024-04-18 09:58:07.355224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:50536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.213 [2024-04-18 09:58:07.355238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.213 [2024-04-18 09:58:07.355258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:50544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.214 [2024-04-18 09:58:07.355272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.214 [2024-04-18 09:58:07.355301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:50552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.214 [2024-04-18 09:58:07.355315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.214 [2024-04-18 09:58:07.355334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:50560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.214 [2024-04-18 09:58:07.355348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.214 [2024-04-18 09:58:07.355376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:50568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.214 [2024-04-18 09:58:07.355396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.214 [2024-04-18 09:58:07.355416] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:50576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.214 [2024-04-18 09:58:07.355431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.214 [2024-04-18 09:58:07.355452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:50584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.214 [2024-04-18 09:58:07.355466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.214 [2024-04-18 09:58:07.355488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:50592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.214 [2024-04-18 09:58:07.355503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.214 [2024-04-18 09:58:07.355523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:50600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.214 [2024-04-18 09:58:07.355537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.214 [2024-04-18 09:58:07.355556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:50608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.214 [2024-04-18 09:58:07.355569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.214 [2024-04-18 09:58:07.355588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:50616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.214 [2024-04-18 09:58:07.355602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.214 [2024-04-18 09:58:07.355623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:50624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.214 [2024-04-18 09:58:07.355637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.214 [2024-04-18 09:58:07.355657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:50952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:44.214 [2024-04-18 09:58:07.355672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.214 [2024-04-18 09:58:07.355691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:50960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:44.214 [2024-04-18 09:58:07.355706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.214 [2024-04-18 09:58:07.355725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:50968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:44.214 [2024-04-18 09:58:07.355740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.214 [2024-04-18 09:58:07.355761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
WRITE sqid:1 cid:120 nsid:1 lba:50976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:44.214 [2024-04-18 09:58:07.355775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.214 [2024-04-18 09:58:07.355794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:50984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:44.214 [2024-04-18 09:58:07.355808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.214 [2024-04-18 09:58:07.355827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:50992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:44.214 [2024-04-18 09:58:07.355841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.214 [2024-04-18 09:58:07.355861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:51000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:44.214 [2024-04-18 09:58:07.355875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.214 [2024-04-18 09:58:07.355894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:51008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:44.214 [2024-04-18 09:58:07.355908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.214 [2024-04-18 09:58:07.355927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:50632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.214 [2024-04-18 09:58:07.355941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.214 [2024-04-18 09:58:07.355960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:50640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.214 [2024-04-18 09:58:07.355974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.214 [2024-04-18 09:58:07.355993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:50648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.214 [2024-04-18 09:58:07.356007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.214 [2024-04-18 09:58:07.356038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:50656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.214 [2024-04-18 09:58:07.356052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.214 [2024-04-18 09:58:07.356074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:50664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.214 [2024-04-18 09:58:07.356089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.214 [2024-04-18 09:58:07.356108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:50672 len:8 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.214 [2024-04-18 09:58:07.356133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.214 [2024-04-18 09:58:07.356157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:50680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.214 [2024-04-18 09:58:07.356178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.214 [2024-04-18 09:58:07.356208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:50688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.214 [2024-04-18 09:58:07.356222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.214 [2024-04-18 09:58:07.356242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:51016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:44.214 [2024-04-18 09:58:07.356257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.214 [2024-04-18 09:58:07.356297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:51024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:44.214 [2024-04-18 09:58:07.356313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.214 [2024-04-18 09:58:07.356343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:51032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:44.214 [2024-04-18 09:58:07.356358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.214 [2024-04-18 09:58:07.356380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:51040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:44.214 [2024-04-18 09:58:07.356394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.214 [2024-04-18 09:58:07.356413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:51048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:44.214 [2024-04-18 09:58:07.356427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.214 [2024-04-18 09:58:07.356446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:51056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:44.214 [2024-04-18 09:58:07.356460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.214 [2024-04-18 09:58:07.356478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:51064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:44.214 [2024-04-18 09:58:07.356492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.214 [2024-04-18 09:58:07.356511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:51072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:44.214 
[2024-04-18 09:58:07.356525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.214 [2024-04-18 09:58:07.356543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:51080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:44.214 [2024-04-18 09:58:07.356557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.214 [2024-04-18 09:58:07.356576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:51088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:44.214 [2024-04-18 09:58:07.356590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.214 [2024-04-18 09:58:07.356611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:51096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:44.214 [2024-04-18 09:58:07.356625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.214 [2024-04-18 09:58:07.356658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:51104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:44.214 [2024-04-18 09:58:07.356681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.214 [2024-04-18 09:58:07.356700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:51112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:44.214 [2024-04-18 09:58:07.356714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.215 [2024-04-18 09:58:07.356733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:51120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:44.215 [2024-04-18 09:58:07.356746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.215 [2024-04-18 09:58:07.356780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:51128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:44.215 [2024-04-18 09:58:07.356796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.215 [2024-04-18 09:58:07.356817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:51136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:44.215 [2024-04-18 09:58:07.356831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.215 [2024-04-18 09:58:07.356850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:51144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:44.215 [2024-04-18 09:58:07.356864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.215 [2024-04-18 09:58:07.356889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:51152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:44.215 [2024-04-18 09:58:07.356903] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.215 [2024-04-18 09:58:07.356922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:51160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:44.215 [2024-04-18 09:58:07.356943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.215 [2024-04-18 09:58:07.356965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:51168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:44.215 [2024-04-18 09:58:07.356980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.215 [2024-04-18 09:58:07.356999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:51176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:44.215 [2024-04-18 09:58:07.357024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.215 [2024-04-18 09:58:07.357053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:51184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:44.215 [2024-04-18 09:58:07.357078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.215 [2024-04-18 09:58:07.357099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:51192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:44.215 [2024-04-18 09:58:07.357113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.215 [2024-04-18 09:58:07.357143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:51200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:44.215 [2024-04-18 09:58:07.357159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.215 [2024-04-18 09:58:07.357178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:50696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.215 [2024-04-18 09:58:07.357192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.215 [2024-04-18 09:58:07.357212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:50704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.215 [2024-04-18 09:58:07.357226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.215 [2024-04-18 09:58:07.357255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:50712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.215 [2024-04-18 09:58:07.357270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.215 [2024-04-18 09:58:07.357297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:50720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.215 [2024-04-18 09:58:07.357311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED 
- SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.215 [2024-04-18 09:58:07.357329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:50728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.215 [2024-04-18 09:58:07.357343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.215 [2024-04-18 09:58:07.357362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:50736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.215 [2024-04-18 09:58:07.357376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.215 [2024-04-18 09:58:07.357396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:50744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.215 [2024-04-18 09:58:07.357415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.215 [2024-04-18 09:58:07.357432] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x614000007040 is same with the state(5) to be set 00:29:44.215 [2024-04-18 09:58:07.357456] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:29:44.215 [2024-04-18 09:58:07.357472] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:29:44.215 [2024-04-18 09:58:07.357490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:50752 len:8 PRP1 0x0 PRP2 0x0 00:29:44.215 [2024-04-18 09:58:07.357512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.215 [2024-04-18 09:58:07.357798] bdev_nvme.c:1600:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x614000007040 was disconnected and freed. reset controller. 
00:29:44.215 [2024-04-18 09:58:07.358003] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:29:44.215 [2024-04-18 09:58:07.358045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.215 [2024-04-18 09:58:07.358063] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:29:44.215 [2024-04-18 09:58:07.358080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.215 [2024-04-18 09:58:07.358095] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:29:44.215 [2024-04-18 09:58:07.358111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.215 [2024-04-18 09:58:07.358143] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:29:44.215 [2024-04-18 09:58:07.358164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.215 [2024-04-18 09:58:07.358177] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x614000004e40 is same with the state(5) to be set 00:29:44.215 [2024-04-18 09:58:07.358456] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:44.215 [2024-04-18 09:58:07.358512] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x614000004e40 (9): Bad file descriptor 00:29:44.215 [2024-04-18 09:58:07.358672] uring.c: 641:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:29:44.215 [2024-04-18 09:58:07.358763] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:44.215 [2024-04-18 09:58:07.358838] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:44.215 [2024-04-18 09:58:07.358871] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000004e40 with addr=10.0.0.2, port=4420 00:29:44.215 [2024-04-18 09:58:07.358893] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x614000004e40 is same with the state(5) to be set 00:29:44.215 [2024-04-18 09:58:07.358929] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x614000004e40 (9): Bad file descriptor 00:29:44.215 [2024-04-18 09:58:07.358957] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:44.215 [2024-04-18 09:58:07.358980] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:44.215 [2024-04-18 09:58:07.358999] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:44.215 [2024-04-18 09:58:07.359073] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:44.215 [2024-04-18 09:58:07.359093] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:44.215 09:58:07 -- host/timeout.sh@90 -- # sleep 1
00:29:45.151 [2024-04-18 09:58:08.359331] uring.c: 641:uring_sock_create: *ERROR*: connect() failed, errno = 111
00:29:45.151 [2024-04-18 09:58:08.359474] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:45.151 [2024-04-18 09:58:08.359540] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:45.151 [2024-04-18 09:58:08.359571] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000004e40 with addr=10.0.0.2, port=4420
00:29:45.151 [2024-04-18 09:58:08.359594] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x614000004e40 is same with the state(5) to be set
00:29:45.151 [2024-04-18 09:58:08.359657] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x614000004e40 (9): Bad file descriptor
00:29:45.151 [2024-04-18 09:58:08.359686] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:45.151 [2024-04-18 09:58:08.359705] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:45.151 [2024-04-18 09:58:08.359722] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:45.151 [2024-04-18 09:58:08.359771] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:45.151 [2024-04-18 09:58:08.359797] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:45.152 09:58:08 -- host/timeout.sh@91 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:29:45.409 [2024-04-18 09:58:08.633499] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:29:45.409 09:58:08 -- host/timeout.sh@92 -- # wait 81868
00:29:46.344 [2024-04-18 09:58:09.373633] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful.
00:29:52.958
00:29:52.958 Latency(us)
00:29:52.958 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:29:52.958 Job: NVMe0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096)
00:29:52.958 Verification LBA range: start 0x0 length 0x4000
00:29:52.958 NVMe0n1 : 10.01 4647.49 18.15 0.00 0.00 27505.96 1995.87 3035150.89
00:29:52.958 ===================================================================================================================
00:29:52.958 Total : 4647.49 18.15 0.00 0.00 27505.96 1995.87 3035150.89
00:29:52.958 0
00:29:52.958 09:58:16 -- host/timeout.sh@97 -- # rpc_pid=81973
00:29:52.958 09:58:16 -- host/timeout.sh@98 -- # sleep 1
00:29:52.958 09:58:16 -- host/timeout.sh@96 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests
00:29:52.958 Running I/O for 10 seconds...
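For orientation: the outage exercised in the pass above is driven entirely by the host/timeout.sh commands echoed in this log. Reduced to a standalone sketch, and using the repository path, NQN, address, and port exactly as they appear on this particular CI VM (they are not general defaults), the sequence is roughly:

    # Drop the TCP listener so the initiator's reconnect attempts start failing (connect() errno = 111 in the log).
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener \
        nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    # Leave the target unreachable for one second while bdevperf keeps the verify workload running.
    sleep 1
    # Restore the listener; the host-side reset loop should then reconnect and resume I/O.
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener \
        nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

As a sanity check on the summary table above, 4647.49 IOPS at a 4096-byte I/O size works out to 4647.49 * 4096 / 1048576 ≈ 18.15 MiB/s, matching the reported throughput; the Fail/s and TO/s columns both read 0.00, which suggests the commands aborted during the SQ deletion were retried after the reconnect rather than surfaced to bdevperf as failures.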
00:29:53.894 09:58:17 -- host/timeout.sh@99 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:29:54.204 [2024-04-18 09:58:17.494646] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003c80 is same with the state(5) to be set 00:29:54.204 [2024-04-18 09:58:17.494726] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003c80 is same with the state(5) to be set 00:29:54.204 [2024-04-18 09:58:17.494747] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003c80 is same with the state(5) to be set 00:29:54.204 [2024-04-18 09:58:17.494761] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003c80 is same with the state(5) to be set 00:29:54.204 [2024-04-18 09:58:17.494777] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003c80 is same with the state(5) to be set 00:29:54.204 [2024-04-18 09:58:17.494789] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003c80 is same with the state(5) to be set 00:29:54.204 [2024-04-18 09:58:17.494803] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003c80 is same with the state(5) to be set 00:29:54.204 [2024-04-18 09:58:17.494817] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003c80 is same with the state(5) to be set 00:29:54.204 [2024-04-18 09:58:17.494835] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003c80 is same with the state(5) to be set 00:29:54.204 [2024-04-18 09:58:17.494846] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003c80 is same with the state(5) to be set 00:29:54.204 [2024-04-18 09:58:17.494860] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003c80 is same with the state(5) to be set 00:29:54.204 [2024-04-18 09:58:17.494872] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003c80 is same with the state(5) to be set 00:29:54.204 [2024-04-18 09:58:17.494887] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003c80 is same with the state(5) to be set 00:29:54.204 [2024-04-18 09:58:17.494899] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003c80 is same with the state(5) to be set 00:29:54.204 [2024-04-18 09:58:17.494913] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003c80 is same with the state(5) to be set 00:29:54.204 [2024-04-18 09:58:17.494925] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003c80 is same with the state(5) to be set 00:29:54.204 [2024-04-18 09:58:17.494938] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003c80 is same with the state(5) to be set 00:29:54.204 [2024-04-18 09:58:17.494950] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003c80 is same with the state(5) to be set 00:29:54.204 [2024-04-18 09:58:17.494963] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003c80 is same with the state(5) to be set 00:29:54.204 [2024-04-18 09:58:17.494975] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003c80 is same with the state(5) to be 
set 00:29:54.204 [2024-04-18 09:58:17.494989] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003c80 is same with the state(5) to be set 00:29:54.204 [2024-04-18 09:58:17.495003] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003c80 is same with the state(5) to be set 00:29:54.204 [2024-04-18 09:58:17.495017] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003c80 is same with the state(5) to be set 00:29:54.204 [2024-04-18 09:58:17.495101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:63688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:54.204 [2024-04-18 09:58:17.495155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:54.204 [2024-04-18 09:58:17.495190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:63696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:54.204 [2024-04-18 09:58:17.495207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:54.204 [2024-04-18 09:58:17.495224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:63704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:54.204 [2024-04-18 09:58:17.495239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:54.204 [2024-04-18 09:58:17.495255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:63712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:54.204 [2024-04-18 09:58:17.495269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:54.204 [2024-04-18 09:58:17.495286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:63720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:54.204 [2024-04-18 09:58:17.495300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:54.204 [2024-04-18 09:58:17.495316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:63728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:54.204 [2024-04-18 09:58:17.495329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:54.204 [2024-04-18 09:58:17.495345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:63736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:54.204 [2024-04-18 09:58:17.495359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:54.204 [2024-04-18 09:58:17.495375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:63744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:54.204 [2024-04-18 09:58:17.495388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:54.204 [2024-04-18 09:58:17.495404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:63752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:54.204 [2024-04-18 09:58:17.495418] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:54.204 [2024-04-18 09:58:17.495434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:63760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:54.204 [2024-04-18 09:58:17.495447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:54.204 [2024-04-18 09:58:17.495463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:63768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:54.204 [2024-04-18 09:58:17.495477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:54.204 [2024-04-18 09:58:17.495493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:63776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:54.204 [2024-04-18 09:58:17.495506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:54.204 [2024-04-18 09:58:17.495523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:63784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:54.204 [2024-04-18 09:58:17.495536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:54.204 [2024-04-18 09:58:17.495560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:63792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:54.204 [2024-04-18 09:58:17.495573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:54.204 [2024-04-18 09:58:17.495589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:63800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:54.204 [2024-04-18 09:58:17.495602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:54.204 [2024-04-18 09:58:17.495618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:63808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:54.204 [2024-04-18 09:58:17.495631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:54.204 [2024-04-18 09:58:17.495648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:63816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:54.204 [2024-04-18 09:58:17.495663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:54.204 [2024-04-18 09:58:17.495680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:63824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:54.204 [2024-04-18 09:58:17.495694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:54.204 [2024-04-18 09:58:17.495710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:63832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:54.204 [2024-04-18 09:58:17.495723] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:54.204 [2024-04-18 09:58:17.495739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:63840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:54.204 [2024-04-18 09:58:17.495753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:54.204 [2024-04-18 09:58:17.495768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:63848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:54.204 [2024-04-18 09:58:17.495782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:54.204 [2024-04-18 09:58:17.495798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:63856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:54.204 [2024-04-18 09:58:17.495812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:54.204 [2024-04-18 09:58:17.495828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:63864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:54.204 [2024-04-18 09:58:17.495844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:54.204 [2024-04-18 09:58:17.495860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:63872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:54.204 [2024-04-18 09:58:17.495875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:54.204 [2024-04-18 09:58:17.495918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:64136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:54.204 [2024-04-18 09:58:17.495933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:54.204 [2024-04-18 09:58:17.495956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:64144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:54.204 [2024-04-18 09:58:17.495974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:54.205 [2024-04-18 09:58:17.495990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:64152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:54.205 [2024-04-18 09:58:17.496003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:54.205 [2024-04-18 09:58:17.496019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:64160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:54.205 [2024-04-18 09:58:17.496034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:54.205 [2024-04-18 09:58:17.496050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:64168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:54.205 [2024-04-18 09:58:17.496063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:54.205 [2024-04-18 09:58:17.496079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:64176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:54.205 [2024-04-18 09:58:17.496092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:54.205 [2024-04-18 09:58:17.496108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:64184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:54.205 [2024-04-18 09:58:17.496136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:54.205 [2024-04-18 09:58:17.496157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:64192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:54.205 [2024-04-18 09:58:17.496171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:54.205 [2024-04-18 09:58:17.496188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:64200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:54.205 [2024-04-18 09:58:17.496202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:54.205 [2024-04-18 09:58:17.496218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:64208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:54.205 [2024-04-18 09:58:17.496232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:54.205 [2024-04-18 09:58:17.496248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:64216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:54.205 [2024-04-18 09:58:17.496261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:54.205 [2024-04-18 09:58:17.496277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:64224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:54.205 [2024-04-18 09:58:17.496291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:54.205 [2024-04-18 09:58:17.496307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:64232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:54.205 [2024-04-18 09:58:17.496320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:54.205 [2024-04-18 09:58:17.496336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:64240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:54.205 [2024-04-18 09:58:17.496350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:54.205 [2024-04-18 09:58:17.496366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:64248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:54.205 [2024-04-18 09:58:17.496388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:29:54.205 [2024-04-18 09:58:17.496405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:64256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:54.205 [2024-04-18 09:58:17.496419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:54.205 [2024-04-18 09:58:17.496435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:63880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:54.205 [2024-04-18 09:58:17.496448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:54.205 [2024-04-18 09:58:17.496464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:63888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:54.205 [2024-04-18 09:58:17.496478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:54.205 [2024-04-18 09:58:17.496493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:63896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:54.205 [2024-04-18 09:58:17.496507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:54.205 [2024-04-18 09:58:17.496522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:63904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:54.205 [2024-04-18 09:58:17.496536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:54.205 [2024-04-18 09:58:17.496561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:63912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:54.205 [2024-04-18 09:58:17.496584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:54.205 [2024-04-18 09:58:17.496609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:63920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:54.205 [2024-04-18 09:58:17.496629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:54.205 [2024-04-18 09:58:17.496654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:63928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:54.205 [2024-04-18 09:58:17.496675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:54.205 [2024-04-18 09:58:17.496716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:63936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:54.205 [2024-04-18 09:58:17.496742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:54.205 [2024-04-18 09:58:17.496779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:64264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:54.205 [2024-04-18 09:58:17.496803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:54.205 
[2024-04-18 09:58:17.496831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:64272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:54.205 [2024-04-18 09:58:17.496851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:54.205 [2024-04-18 09:58:17.496874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:64280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:54.205 [2024-04-18 09:58:17.496891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:54.205 [2024-04-18 09:58:17.496907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:64288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:54.205 [2024-04-18 09:58:17.496921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:54.205 [2024-04-18 09:58:17.496937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:64296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:54.205 [2024-04-18 09:58:17.496950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:54.205 [2024-04-18 09:58:17.496966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:64304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:54.205 [2024-04-18 09:58:17.496980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:54.205 [2024-04-18 09:58:17.496996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:64312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:54.205 [2024-04-18 09:58:17.497015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:54.205 [2024-04-18 09:58:17.497032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:64320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:54.205 [2024-04-18 09:58:17.497046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:54.205 [2024-04-18 09:58:17.497062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:64328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:54.205 [2024-04-18 09:58:17.497076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:54.205 [2024-04-18 09:58:17.497092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:64336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:54.205 [2024-04-18 09:58:17.497106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:54.205 [2024-04-18 09:58:17.497136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:64344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:54.205 [2024-04-18 09:58:17.497152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:54.205 [2024-04-18 09:58:17.497168] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:64352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:54.205 [2024-04-18 09:58:17.497182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:54.205 [2024-04-18 09:58:17.497198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:64360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:54.205 [2024-04-18 09:58:17.497212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:54.205 [2024-04-18 09:58:17.497227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:64368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:54.205 [2024-04-18 09:58:17.497240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:54.205 [2024-04-18 09:58:17.497256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:64376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:54.205 [2024-04-18 09:58:17.497271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:54.205 [2024-04-18 09:58:17.497287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:64384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:54.205 [2024-04-18 09:58:17.497310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:54.205 [2024-04-18 09:58:17.497332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:64392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:54.205 [2024-04-18 09:58:17.497346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:54.205 [2024-04-18 09:58:17.497362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:64400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:54.205 [2024-04-18 09:58:17.497376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:54.206 [2024-04-18 09:58:17.497392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:64408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:54.206 [2024-04-18 09:58:17.497405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:54.206 [2024-04-18 09:58:17.497420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:64416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:54.206 [2024-04-18 09:58:17.497434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:54.206 [2024-04-18 09:58:17.497450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:63944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:54.206 [2024-04-18 09:58:17.497463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:54.206 [2024-04-18 09:58:17.497479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:111 nsid:1 lba:63952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:54.206 [2024-04-18 09:58:17.497493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:54.206 [2024-04-18 09:58:17.497509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:63960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:54.206 [2024-04-18 09:58:17.497529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:54.206 [2024-04-18 09:58:17.497546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:63968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:54.206 [2024-04-18 09:58:17.497559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:54.206 [2024-04-18 09:58:17.497575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:63976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:54.206 [2024-04-18 09:58:17.497589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:54.206 [2024-04-18 09:58:17.497606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:63984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:54.206 [2024-04-18 09:58:17.497620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:54.206 [2024-04-18 09:58:17.497635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:63992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:54.206 [2024-04-18 09:58:17.497648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:54.206 [2024-04-18 09:58:17.497665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:64000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:54.206 [2024-04-18 09:58:17.497678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:54.206 [2024-04-18 09:58:17.497695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:64424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:54.206 [2024-04-18 09:58:17.497708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:54.206 [2024-04-18 09:58:17.497724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:64432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:54.206 [2024-04-18 09:58:17.497738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:54.206 [2024-04-18 09:58:17.497753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:64440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:54.206 [2024-04-18 09:58:17.497767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:54.206 [2024-04-18 09:58:17.497784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:64448 len:8 SGL DATA 
BLOCK OFFSET 0x0 len:0x1000 00:29:54.206 [2024-04-18 09:58:17.497797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:54.206 [2024-04-18 09:58:17.497813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:64456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:54.206 [2024-04-18 09:58:17.497827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:54.206 [2024-04-18 09:58:17.497842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:64464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:54.206 [2024-04-18 09:58:17.497856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:54.206 [2024-04-18 09:58:17.497871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:64472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:54.206 [2024-04-18 09:58:17.497885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:54.206 [2024-04-18 09:58:17.497901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:64480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:54.206 [2024-04-18 09:58:17.497914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:54.206 [2024-04-18 09:58:17.497936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:64488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:54.206 [2024-04-18 09:58:17.497953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:54.206 [2024-04-18 09:58:17.497969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:64496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:54.206 [2024-04-18 09:58:17.497982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:54.206 [2024-04-18 09:58:17.497998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:64504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:54.206 [2024-04-18 09:58:17.498016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:54.206 [2024-04-18 09:58:17.498032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:64512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:54.206 [2024-04-18 09:58:17.498046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:54.206 [2024-04-18 09:58:17.498079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:64520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:54.206 [2024-04-18 09:58:17.498094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:54.206 [2024-04-18 09:58:17.498111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:64528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:54.206 [2024-04-18 
09:58:17.498136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:54.206 [2024-04-18 09:58:17.498156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:64536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:54.206 [2024-04-18 09:58:17.498170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:54.206 [2024-04-18 09:58:17.498186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:64544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:54.206 [2024-04-18 09:58:17.498200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:54.206 [2024-04-18 09:58:17.498215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:64552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:54.206 [2024-04-18 09:58:17.498229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:54.206 [2024-04-18 09:58:17.498245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:64560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:54.206 [2024-04-18 09:58:17.498259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:54.206 [2024-04-18 09:58:17.498275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:64008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:54.206 [2024-04-18 09:58:17.498288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:54.206 [2024-04-18 09:58:17.498304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:64016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:54.206 [2024-04-18 09:58:17.498318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:54.206 [2024-04-18 09:58:17.498334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:64024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:54.206 [2024-04-18 09:58:17.498347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:54.206 [2024-04-18 09:58:17.498363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:64032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:54.206 [2024-04-18 09:58:17.498376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:54.206 [2024-04-18 09:58:17.498401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:64040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:54.206 [2024-04-18 09:58:17.498415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:54.206 [2024-04-18 09:58:17.498431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:64048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:54.206 [2024-04-18 09:58:17.498444] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:54.206 [2024-04-18 09:58:17.498470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:64056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:54.206 [2024-04-18 09:58:17.498484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:54.206 [2024-04-18 09:58:17.498500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:64064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:54.206 [2024-04-18 09:58:17.498514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:54.206 [2024-04-18 09:58:17.498529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:64568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:54.206 [2024-04-18 09:58:17.498548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:54.206 [2024-04-18 09:58:17.498565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:64576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:54.206 [2024-04-18 09:58:17.498579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:54.206 [2024-04-18 09:58:17.498595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:64584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:54.206 [2024-04-18 09:58:17.498608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:54.206 [2024-04-18 09:58:17.498624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:64592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:54.206 [2024-04-18 09:58:17.498638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:54.207 [2024-04-18 09:58:17.498654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:64600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:54.207 [2024-04-18 09:58:17.498667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:54.207 [2024-04-18 09:58:17.498685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:64608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:54.207 [2024-04-18 09:58:17.498699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:54.207 [2024-04-18 09:58:17.498715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:64616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:54.207 [2024-04-18 09:58:17.498728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:54.207 [2024-04-18 09:58:17.498744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:64624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:54.207 [2024-04-18 09:58:17.498758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:54.207 [2024-04-18 09:58:17.498773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:64632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:54.207 [2024-04-18 09:58:17.498787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:54.207 [2024-04-18 09:58:17.498803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:64640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:54.207 [2024-04-18 09:58:17.498817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:54.207 [2024-04-18 09:58:17.498833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:64648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:54.207 [2024-04-18 09:58:17.498846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:54.207 [2024-04-18 09:58:17.498861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:64656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:54.207 [2024-04-18 09:58:17.498874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:54.207 [2024-04-18 09:58:17.498891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:64664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:54.207 [2024-04-18 09:58:17.498904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:54.207 [2024-04-18 09:58:17.498920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:64672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:54.207 [2024-04-18 09:58:17.498933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:54.207 [2024-04-18 09:58:17.498949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:64680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:54.207 [2024-04-18 09:58:17.498962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:54.207 [2024-04-18 09:58:17.498988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:64688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:54.207 [2024-04-18 09:58:17.499001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:54.207 [2024-04-18 09:58:17.499017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:64696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:54.207 [2024-04-18 09:58:17.499030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:54.207 [2024-04-18 09:58:17.499046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:64704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:54.207 [2024-04-18 09:58:17.499070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 
m:0 dnr:0 00:29:54.207 [2024-04-18 09:58:17.499089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:64072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:54.207 [2024-04-18 09:58:17.499102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:54.207 [2024-04-18 09:58:17.499117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:64080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:54.207 [2024-04-18 09:58:17.499141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:54.207 [2024-04-18 09:58:17.499159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:64088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:54.207 [2024-04-18 09:58:17.499172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:54.207 [2024-04-18 09:58:17.499188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:64096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:54.207 [2024-04-18 09:58:17.499201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:54.207 [2024-04-18 09:58:17.499217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:64104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:54.207 [2024-04-18 09:58:17.499230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:54.207 [2024-04-18 09:58:17.499246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:64112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:54.207 [2024-04-18 09:58:17.499258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:54.207 [2024-04-18 09:58:17.499274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:64120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:54.207 [2024-04-18 09:58:17.499287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:54.207 [2024-04-18 09:58:17.499301] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x614000009e40 is same with the state(5) to be set 00:29:54.207 [2024-04-18 09:58:17.499321] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:29:54.207 [2024-04-18 09:58:17.499333] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:29:54.207 [2024-04-18 09:58:17.499346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:64128 len:8 PRP1 0x0 PRP2 0x0 00:29:54.207 [2024-04-18 09:58:17.499359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:54.207 [2024-04-18 09:58:17.499630] bdev_nvme.c:1600:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x614000009e40 was disconnected and freed. reset controller. 
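The run of entries above is the qpair teardown flushing its queue: every outstanding READ and WRITE is completed manually with ABORTED - SQ DELETION (00/08) before qpair 0x614000009e40 is freed and the controller reset begins. If a tally is more useful than reading each entry, a quick shell pass over a saved copy of this console output does it (build.log is only a placeholder name for wherever the log was saved):

  # total aborted completions in the dump
  grep -o 'ABORTED - SQ DELETION' build.log | wc -l
  # split the aborted submissions by opcode
  grep -oE '\*NOTICE\*: (READ|WRITE) sqid:1' build.log | sort | uniq -c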
00:29:54.207 [2024-04-18 09:58:17.499753] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:29:54.207 [2024-04-18 09:58:17.499787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:54.207 [2024-04-18 09:58:17.499804] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:29:54.207 [2024-04-18 09:58:17.499819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:54.207 [2024-04-18 09:58:17.499833] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:29:54.207 [2024-04-18 09:58:17.499847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:54.207 [2024-04-18 09:58:17.499861] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:29:54.207 [2024-04-18 09:58:17.499874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:54.207 [2024-04-18 09:58:17.499886] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x614000004e40 is same with the state(5) to be set 00:29:54.207 [2024-04-18 09:58:17.500153] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:54.207 [2024-04-18 09:58:17.500200] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x614000004e40 (9): Bad file descriptor 00:29:54.207 [2024-04-18 09:58:17.500331] uring.c: 641:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.207 [2024-04-18 09:58:17.500409] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.207 [2024-04-18 09:58:17.500491] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.207 [2024-04-18 09:58:17.500524] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000004e40 with addr=10.0.0.2, port=4420 00:29:54.207 [2024-04-18 09:58:17.500541] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x614000004e40 is same with the state(5) to be set 00:29:54.207 [2024-04-18 09:58:17.500571] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x614000004e40 (9): Bad file descriptor 00:29:54.207 [2024-04-18 09:58:17.500597] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:54.207 [2024-04-18 09:58:17.500612] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:54.207 [2024-04-18 09:58:17.500626] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:54.207 [2024-04-18 09:58:17.500657] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:54.207 [2024-04-18 09:58:17.500674] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:54.207 09:58:17 -- host/timeout.sh@101 -- # sleep 3 00:29:55.145 [2024-04-18 09:58:18.500892] uring.c: 641:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.145 [2024-04-18 09:58:18.501030] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.145 [2024-04-18 09:58:18.501089] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.145 [2024-04-18 09:58:18.501115] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000004e40 with addr=10.0.0.2, port=4420 00:29:55.145 [2024-04-18 09:58:18.501152] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x614000004e40 is same with the state(5) to be set 00:29:55.145 [2024-04-18 09:58:18.501197] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x614000004e40 (9): Bad file descriptor 00:29:55.145 [2024-04-18 09:58:18.501226] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:55.145 [2024-04-18 09:58:18.501242] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:55.145 [2024-04-18 09:58:18.501257] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:55.145 [2024-04-18 09:58:18.501298] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:55.145 [2024-04-18 09:58:18.501316] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:56.524 [2024-04-18 09:58:19.501547] uring.c: 641:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.524 [2024-04-18 09:58:19.501684] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.524 [2024-04-18 09:58:19.501742] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.524 [2024-04-18 09:58:19.501770] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000004e40 with addr=10.0.0.2, port=4420 00:29:56.524 [2024-04-18 09:58:19.501791] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x614000004e40 is same with the state(5) to be set 00:29:56.524 [2024-04-18 09:58:19.501831] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x614000004e40 (9): Bad file descriptor 00:29:56.524 [2024-04-18 09:58:19.501861] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:56.524 [2024-04-18 09:58:19.501877] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:56.524 [2024-04-18 09:58:19.501892] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:56.524 [2024-04-18 09:58:19.501933] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:56.524 [2024-04-18 09:58:19.501951] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:57.462 [2024-04-18 09:58:20.505206] uring.c: 641:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.462 [2024-04-18 09:58:20.505339] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.462 [2024-04-18 09:58:20.505400] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.462 [2024-04-18 09:58:20.505425] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000004e40 with addr=10.0.0.2, port=4420 00:29:57.462 [2024-04-18 09:58:20.505446] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x614000004e40 is same with the state(5) to be set 00:29:57.462 [2024-04-18 09:58:20.505735] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x614000004e40 (9): Bad file descriptor 00:29:57.462 [2024-04-18 09:58:20.506009] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:57.462 [2024-04-18 09:58:20.506039] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:57.462 [2024-04-18 09:58:20.506056] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:57.462 [2024-04-18 09:58:20.510023] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:57.462 [2024-04-18 09:58:20.510064] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:57.462 09:58:20 -- host/timeout.sh@102 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:29:57.462 [2024-04-18 09:58:20.766946] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:29:57.462 09:58:20 -- host/timeout.sh@103 -- # wait 81973 00:29:58.410 [2024-04-18 09:58:21.546426] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
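Taken together, the preceding entries are the recovery half of the timeout test: the TCP listener had been pulled out from under the initiator, every reconnect attempt then failed at connect() with errno 111 (ECONNREFUSED on Linux), host/timeout.sh@101 paced the outage with sleep 3, and as soon as @102 re-added the listener the next reset completed ("Resetting controller successful."). A minimal sketch of the same outage-and-recovery probe against a live target, assuming the rpc.py path and the listener values from this run:

  RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  NQN=nqn.2016-06.io.spdk:cnode1
  # pull the TCP listener; in-flight reconnects now fail with ECONNREFUSED
  $RPC nvmf_subsystem_remove_listener $NQN -t tcp -a 10.0.0.2 -s 4420
  sleep 3   # same pacing the test script uses while the reconnects fail
  # restore the listener; the initiator's next reset attempt should go through
  $RPC nvmf_subsystem_add_listener $NQN -t tcp -a 10.0.0.2 -s 4420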
00:30:03.685 00:30:03.685 Latency(us) 00:30:03.685 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:30:03.685 Job: NVMe0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:30:03.685 Verification LBA range: start 0x0 length 0x4000 00:30:03.685 NVMe0n1 : 10.01 4212.22 16.45 3515.15 0.00 16525.69 755.90 3019898.88 00:30:03.685 =================================================================================================================== 00:30:03.685 Total : 4212.22 16.45 3515.15 0.00 16525.69 0.00 3019898.88 00:30:03.685 0 00:30:03.685 09:58:26 -- host/timeout.sh@105 -- # killprocess 81844 00:30:03.685 09:58:26 -- common/autotest_common.sh@936 -- # '[' -z 81844 ']' 00:30:03.685 09:58:26 -- common/autotest_common.sh@940 -- # kill -0 81844 00:30:03.685 09:58:26 -- common/autotest_common.sh@941 -- # uname 00:30:03.685 09:58:26 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:30:03.685 09:58:26 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 81844 00:30:03.685 killing process with pid 81844 00:30:03.685 Received shutdown signal, test time was about 10.000000 seconds 00:30:03.685 00:30:03.685 Latency(us) 00:30:03.685 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:30:03.685 =================================================================================================================== 00:30:03.685 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:30:03.685 09:58:26 -- common/autotest_common.sh@942 -- # process_name=reactor_2 00:30:03.685 09:58:26 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']' 00:30:03.685 09:58:26 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 81844' 00:30:03.685 09:58:26 -- common/autotest_common.sh@955 -- # kill 81844 00:30:03.685 09:58:26 -- common/autotest_common.sh@960 -- # wait 81844 00:30:04.252 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:30:04.252 09:58:27 -- host/timeout.sh@110 -- # bdevperf_pid=82094 00:30:04.252 09:58:27 -- host/timeout.sh@109 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w randread -t 10 -f 00:30:04.252 09:58:27 -- host/timeout.sh@112 -- # waitforlisten 82094 /var/tmp/bdevperf.sock 00:30:04.252 09:58:27 -- common/autotest_common.sh@817 -- # '[' -z 82094 ']' 00:30:04.252 09:58:27 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:30:04.252 09:58:27 -- common/autotest_common.sh@822 -- # local max_retries=100 00:30:04.252 09:58:27 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:30:04.252 09:58:27 -- common/autotest_common.sh@826 -- # xtrace_disable 00:30:04.252 09:58:27 -- common/autotest_common.sh@10 -- # set +x 00:30:04.252 [2024-04-18 09:58:27.544189] Starting SPDK v24.05-pre git sha1 65b4e17c6 / DPDK 23.11.0 initialization... 
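The killprocess trace above follows the usual autotest pattern: confirm the pid argument is non-empty and the process still answers kill -0, check its command name (reactor_2 in this run) so a sudo wrapper is never killed, then kill it and reap the exit status. A simplified bash sketch of that sequence, not the exact helper from the repo:

  killprocess() {
      local pid=$1
      [ -n "$pid" ] || return 1                    # the '[ -z 81844 ]' guard
      kill -0 "$pid" 2>/dev/null || return 0       # already gone, nothing to do
      local name
      name=$(ps --no-headers -o comm= "$pid")      # reactor_2 in this run
      [ "$name" = sudo ] && return 1               # refuse to kill a sudo wrapper
      echo "killing process with pid $pid"
      kill "$pid"
      wait "$pid" 2>/dev/null                      # reap it when it is our child
  }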
00:30:04.252 [2024-04-18 09:58:27.544916] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid82094 ] 00:30:04.510 [2024-04-18 09:58:27.720559] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:04.767 [2024-04-18 09:58:27.959194] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:30:05.333 09:58:28 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:30:05.333 09:58:28 -- common/autotest_common.sh@850 -- # return 0 00:30:05.333 09:58:28 -- host/timeout.sh@116 -- # dtrace_pid=82111 00:30:05.333 09:58:28 -- host/timeout.sh@115 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 82094 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_timeout.bt 00:30:05.333 09:58:28 -- host/timeout.sh@118 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 -e 9 00:30:05.591 09:58:28 -- host/timeout.sh@120 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 --ctrlr-loss-timeout-sec 5 --reconnect-delay-sec 2 00:30:05.849 NVMe0n1 00:30:05.849 09:58:29 -- host/timeout.sh@124 -- # rpc_pid=82151 00:30:05.849 09:58:29 -- host/timeout.sh@123 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:30:05.849 09:58:29 -- host/timeout.sh@125 -- # sleep 1 00:30:06.108 Running I/O for 10 seconds... 00:30:07.048 09:58:30 -- host/timeout.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:30:07.048 [2024-04-18 09:58:30.421182] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000005080 is same with the state(5) to be set 00:30:07.048 [2024-04-18 09:58:30.421260] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000005080 is same with the state(5) to be set 00:30:07.048 [2024-04-18 09:58:30.421282] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000005080 is same with the state(5) to be set 00:30:07.048 [2024-04-18 09:58:30.421296] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000005080 is same with the state(5) to be set 00:30:07.048 [2024-04-18 09:58:30.421310] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000005080 is same with the state(5) to be set 00:30:07.048 [2024-04-18 09:58:30.421323] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000005080 is same with the state(5) to be set 00:30:07.048 [2024-04-18 09:58:30.421338] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000005080 is same with the state(5) to be set 00:30:07.048 [2024-04-18 09:58:30.421351] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000005080 is same with the state(5) to be set 00:30:07.048 [2024-04-18 09:58:30.421369] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000005080 is same with the state(5) to be set 00:30:07.049 [2024-04-18 09:58:30.421381] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000005080 is same with the state(5) to be set 00:30:07.049 [2024-04-18 09:58:30.421396] 
tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000005080 is same with the state(5) to be set 00:30:07.049 [identical nvmf_tcp_qpair_set_recv_state *ERROR* entries for tqpair=0x618000005080 repeat from 2024-04-18 09:58:30.421408 through 09:58:30.422272] 00:30:07.049 [2024-04-18 09:58:30.422284]
tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000005080 is same with the state(5) to be set 00:30:07.049 [2024-04-18 09:58:30.422298] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000005080 is same with the state(5) to be set 00:30:07.049 [2024-04-18 09:58:30.422309] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000005080 is same with the state(5) to be set 00:30:07.049 [2024-04-18 09:58:30.422299] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:30:07.049 [2024-04-18 09:58:30.422323] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000005080 is same with the state(5) to be set 00:30:07.049 [2024-04-18 09:58:30.422335] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000005080 is same with the state(5) to be set 00:30:07.049 [2024-04-18 09:58:30.422344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:07.049 [2024-04-18 09:58:30.422349] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000005080 is same with the state(5) to be set 00:30:07.049 [2024-04-18 09:58:30.422362] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000005080 is same with the state(5) to be set 00:30:07.049 [2024-04-18 09:58:30.422366] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:30:07.049 [2024-04-18 09:58:30.422375] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000005080 is same with the state(5) to be set 00:30:07.049 [2024-04-18 09:58:30.422385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:07.049 [2024-04-18 09:58:30.422388] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000005080 is same with the state(5) to be set 00:30:07.049 [2024-04-18 09:58:30.422401] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:30:07.049 [2024-04-18 09:58:30.422402] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000005080 is same with the state(5) to be set 00:30:07.049 [2024-04-18 09:58:30.422414] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000005080 is same with the state(5) to be set 00:30:07.049 [2024-04-18 09:58:30.422423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:07.049 [2024-04-18 09:58:30.422428] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000005080 is same with the state(5) to be set 00:30:07.050 [2024-04-18 09:58:30.422439] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:30:07.050 [2024-04-18 09:58:30.422440] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000005080 is same with the state(5) to be set 00:30:07.050 [2024-04-18 09:58:30.422454] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000005080 is same with the state(5) to be set 00:30:07.050 
[2024-04-18 09:58:30.422456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:07.050 [2024-04-18 09:58:30.422466] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000005080 is same with the state(5) to be set 00:30:07.050 [2024-04-18 09:58:30.422470] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x614000005040 is same with the state(5) to be set 00:30:07.050 [2024-04-18 09:58:30.422482] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000005080 is same with the state(5) to be set 00:30:07.050 [2024-04-18 09:58:30.422495] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000005080 is same with the state(5) to be set 00:30:07.050 [2024-04-18 09:58:30.422509] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000005080 is same with the state(5) to be set 00:30:07.050 [2024-04-18 09:58:30.422520] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000005080 is same with the state(5) to be set 00:30:07.050 [2024-04-18 09:58:30.422534] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000005080 is same with the state(5) to be set 00:30:07.050 [2024-04-18 09:58:30.422546] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000005080 is same with the state(5) to be set 00:30:07.050 [2024-04-18 09:58:30.422560] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000005080 is same with the state(5) to be set 00:30:07.050 [2024-04-18 09:58:30.422572] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000005080 is same with the state(5) to be set 00:30:07.050 [2024-04-18 09:58:30.422587] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000005080 is same with the state(5) to be set 00:30:07.050 [2024-04-18 09:58:30.422599] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000005080 is same with the state(5) to be set 00:30:07.050 [2024-04-18 09:58:30.422616] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000005080 is same with the state(5) to be set 00:30:07.050 [2024-04-18 09:58:30.422628] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000005080 is same with the state(5) to be set 00:30:07.050 [2024-04-18 09:58:30.422651] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000005080 is same with the state(5) to be set 00:30:07.050 [2024-04-18 09:58:30.422663] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000005080 is same with the state(5) to be set 00:30:07.050 [2024-04-18 09:58:30.422677] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000005080 is same with the state(5) to be set 00:30:07.050 [2024-04-18 09:58:30.422692] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000005080 is same with the state(5) to be set 00:30:07.050 [2024-04-18 09:58:30.422708] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000005080 is same with the state(5) to be set 00:30:07.050 [2024-04-18 09:58:30.422720] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000005080 is same with the state(5) to be set 
00:30:07.050 [2024-04-18 09:58:30.422734] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000005080 is same with the state(5) to be set 00:30:07.050 [2024-04-18 09:58:30.422746] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000005080 is same with the state(5) to be set 00:30:07.050 [2024-04-18 09:58:30.422760] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000005080 is same with the state(5) to be set 00:30:07.050 [2024-04-18 09:58:30.422771] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000005080 is same with the state(5) to be set 00:30:07.050 [2024-04-18 09:58:30.422785] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000005080 is same with the state(5) to be set 00:30:07.050 [2024-04-18 09:58:30.422797] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000005080 is same with the state(5) to be set 00:30:07.050 [2024-04-18 09:58:30.422810] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000005080 is same with the state(5) to be set 00:30:07.050 [2024-04-18 09:58:30.422822] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000005080 is same with the state(5) to be set 00:30:07.050 [2024-04-18 09:58:30.422835] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000005080 is same with the state(5) to be set 00:30:07.050 [2024-04-18 09:58:30.422847] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000005080 is same with the state(5) to be set 00:30:07.050 [2024-04-18 09:58:30.422866] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000005080 is same with the state(5) to be set 00:30:07.050 [2024-04-18 09:58:30.422878] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000005080 is same with the state(5) to be set 00:30:07.050 [2024-04-18 09:58:30.422891] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000005080 is same with the state(5) to be set 00:30:07.050 [2024-04-18 09:58:30.422903] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000005080 is same with the state(5) to be set 00:30:07.050 [2024-04-18 09:58:30.422919] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000005080 is same with the state(5) to be set 00:30:07.050 [2024-04-18 09:58:30.422932] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000005080 is same with the state(5) to be set 00:30:07.050 [2024-04-18 09:58:30.422945] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000005080 is same with the state(5) to be set 00:30:07.050 [2024-04-18 09:58:30.422957] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000005080 is same with the state(5) to be set 00:30:07.050 [2024-04-18 09:58:30.422972] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000005080 is same with the state(5) to be set 00:30:07.050 [2024-04-18 09:58:30.422984] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000005080 is same with the state(5) to be set 00:30:07.050 [2024-04-18 09:58:30.423095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:1216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:30:07.050 [2024-04-18 09:58:30.423148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:07.050 [2024-04-18 09:58:30.423187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:22432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:07.050 [2024-04-18 09:58:30.423204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:07.050 [2024-04-18 09:58:30.423228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:27384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:07.050 [2024-04-18 09:58:30.423243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:07.050 [2024-04-18 09:58:30.423262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:94144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:07.050 [2024-04-18 09:58:30.423276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:07.050 [2024-04-18 09:58:30.423297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:92344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:07.050 [2024-04-18 09:58:30.423311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:07.050 [2024-04-18 09:58:30.423330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:69704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:07.050 [2024-04-18 09:58:30.423344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:07.050 [2024-04-18 09:58:30.423363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:85800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:07.050 [2024-04-18 09:58:30.423377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:07.050 [2024-04-18 09:58:30.423396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:105808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:07.050 [2024-04-18 09:58:30.423410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:07.050 [2024-04-18 09:58:30.423440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:40216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:07.050 [2024-04-18 09:58:30.423455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:07.050 [2024-04-18 09:58:30.423473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:23296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:07.050 [2024-04-18 09:58:30.423487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:07.050 [2024-04-18 09:58:30.423508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:4512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:07.050 [2024-04-18 09:58:30.423522] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:07.050 [2024-04-18 09:58:30.423541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:27640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:07.050 [2024-04-18 09:58:30.423555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:07.050 [2024-04-18 09:58:30.423575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:63104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:07.050 [2024-04-18 09:58:30.423590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:07.050 [2024-04-18 09:58:30.423609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:108496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:07.050 [2024-04-18 09:58:30.423623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:07.051 [2024-04-18 09:58:30.423642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:58032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:07.051 [2024-04-18 09:58:30.423656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:07.051 [2024-04-18 09:58:30.423675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:07.051 [2024-04-18 09:58:30.423690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:07.051 [2024-04-18 09:58:30.423708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:5360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:07.051 [2024-04-18 09:58:30.423722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:07.051 [2024-04-18 09:58:30.423741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:5664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:07.051 [2024-04-18 09:58:30.423755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:07.051 [2024-04-18 09:58:30.423782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:34352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:07.051 [2024-04-18 09:58:30.423797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:07.051 [2024-04-18 09:58:30.423816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:39704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:07.051 [2024-04-18 09:58:30.423830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:07.051 [2024-04-18 09:58:30.423849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:78856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:07.051 [2024-04-18 09:58:30.423863] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:07.051 [2024-04-18 09:58:30.423882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:34032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:07.051 [2024-04-18 09:58:30.423895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:07.051 [2024-04-18 09:58:30.423914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:90984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:07.051 [2024-04-18 09:58:30.423928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:07.051 [2024-04-18 09:58:30.423950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:118136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:07.051 [2024-04-18 09:58:30.423964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:07.051 [2024-04-18 09:58:30.423987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:23264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:07.051 [2024-04-18 09:58:30.424002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:07.051 [2024-04-18 09:58:30.424026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:17056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:07.051 [2024-04-18 09:58:30.424040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:07.051 [2024-04-18 09:58:30.424061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:5424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:07.051 [2024-04-18 09:58:30.424075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:07.051 [2024-04-18 09:58:30.424094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:17600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:07.051 [2024-04-18 09:58:30.424108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:07.051 [2024-04-18 09:58:30.424141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:60408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:07.051 [2024-04-18 09:58:30.424158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:07.051 [2024-04-18 09:58:30.424178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:7480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:07.051 [2024-04-18 09:58:30.424192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:07.051 [2024-04-18 09:58:30.424211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:80368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:07.051 [2024-04-18 09:58:30.424225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 
cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:07.051 [2024-04-18 09:58:30.424246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:101560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:07.051 [2024-04-18 09:58:30.424261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:07.051 [2024-04-18 09:58:30.424280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:13696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:07.051 [2024-04-18 09:58:30.424295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:07.051 [2024-04-18 09:58:30.424313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:86552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:07.051 [2024-04-18 09:58:30.424328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:07.051 [2024-04-18 09:58:30.424350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:90984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:07.051 [2024-04-18 09:58:30.424364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:07.051 [2024-04-18 09:58:30.424383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:125064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:07.051 [2024-04-18 09:58:30.424396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:07.051 [2024-04-18 09:58:30.424421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:16632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:07.051 [2024-04-18 09:58:30.424435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:07.051 [2024-04-18 09:58:30.424454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:48704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:07.051 [2024-04-18 09:58:30.424469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:07.051 [2024-04-18 09:58:30.424488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:75488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:07.051 [2024-04-18 09:58:30.424508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:07.051 [2024-04-18 09:58:30.424534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:82672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:07.051 [2024-04-18 09:58:30.424548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:07.051 [2024-04-18 09:58:30.424566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:17272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:07.051 [2024-04-18 09:58:30.424580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:30:07.051 [2024-04-18 09:58:30.424599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:118312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:07.051 [2024-04-18 09:58:30.424627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:07.051 [2024-04-18 09:58:30.424651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:52552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:07.051 [2024-04-18 09:58:30.424666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:07.051 [2024-04-18 09:58:30.424687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:122384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:07.051 [2024-04-18 09:58:30.424702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:07.051 [2024-04-18 09:58:30.424722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:3488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:07.051 [2024-04-18 09:58:30.424736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:07.051 [2024-04-18 09:58:30.424755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:69056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:07.051 [2024-04-18 09:58:30.424769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:07.051 [2024-04-18 09:58:30.424788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:99048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:07.051 [2024-04-18 09:58:30.424803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:07.051 [2024-04-18 09:58:30.424821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:123832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:07.051 [2024-04-18 09:58:30.424835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:07.051 [2024-04-18 09:58:30.424854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:56168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:07.051 [2024-04-18 09:58:30.424867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:07.051 [2024-04-18 09:58:30.424886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:6264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:07.051 [2024-04-18 09:58:30.424900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:07.051 [2024-04-18 09:58:30.424922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:67480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:07.051 [2024-04-18 09:58:30.424937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:07.051 [2024-04-18 09:58:30.424956] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:70920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:07.051 [2024-04-18 09:58:30.424970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:07.051 [2024-04-18 09:58:30.424989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:42112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:07.051 [2024-04-18 09:58:30.425004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:07.051 [2024-04-18 09:58:30.425033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:100680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:07.052 [2024-04-18 09:58:30.425057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:07.052 [2024-04-18 09:58:30.425076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:40608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:07.052 [2024-04-18 09:58:30.425091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:07.052 [2024-04-18 09:58:30.425110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:113328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:07.052 [2024-04-18 09:58:30.425135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:07.052 [2024-04-18 09:58:30.425158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:52216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:07.052 [2024-04-18 09:58:30.425179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:07.052 [2024-04-18 09:58:30.425200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:51872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:07.052 [2024-04-18 09:58:30.425215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:07.052 [2024-04-18 09:58:30.425237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:40664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:07.052 [2024-04-18 09:58:30.425251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:07.052 [2024-04-18 09:58:30.425270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:120048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:07.052 [2024-04-18 09:58:30.425285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:07.052 [2024-04-18 09:58:30.425304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:28760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:07.052 [2024-04-18 09:58:30.425318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:07.052 [2024-04-18 09:58:30.425337] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:50816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:07.052 [2024-04-18 09:58:30.425351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:07.052 [2024-04-18 09:58:30.425370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:90784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:07.052 [2024-04-18 09:58:30.425384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:07.052 [2024-04-18 09:58:30.425403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:93056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:07.052 [2024-04-18 09:58:30.425417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:07.052 [2024-04-18 09:58:30.425436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:129976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:07.052 [2024-04-18 09:58:30.425450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:07.052 [2024-04-18 09:58:30.425468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:38472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:07.052 [2024-04-18 09:58:30.425483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:07.052 [2024-04-18 09:58:30.425504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:112584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:07.052 [2024-04-18 09:58:30.425519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:07.052 [2024-04-18 09:58:30.425538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:32136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:07.052 [2024-04-18 09:58:30.425552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:07.052 [2024-04-18 09:58:30.425570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:29616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:07.052 [2024-04-18 09:58:30.425584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:07.052 [2024-04-18 09:58:30.425613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:70392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:07.052 [2024-04-18 09:58:30.425632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:07.052 [2024-04-18 09:58:30.425653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:3992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:07.052 [2024-04-18 09:58:30.425668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:07.052 [2024-04-18 09:58:30.425687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:71 nsid:1 lba:78272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:07.052 [2024-04-18 09:58:30.425701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:07.052 [2024-04-18 09:58:30.425720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:92496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:07.052 [2024-04-18 09:58:30.425734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:07.052 [2024-04-18 09:58:30.425753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:91400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:07.052 [2024-04-18 09:58:30.425767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:07.052 [2024-04-18 09:58:30.425788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:61168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:07.052 [2024-04-18 09:58:30.425802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:07.052 [2024-04-18 09:58:30.425822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:124512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:07.052 [2024-04-18 09:58:30.425836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:07.052 [2024-04-18 09:58:30.425855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:26928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:07.052 [2024-04-18 09:58:30.425869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:07.052 [2024-04-18 09:58:30.425888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:70752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:07.052 [2024-04-18 09:58:30.425902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:07.052 [2024-04-18 09:58:30.425921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:81432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:07.052 [2024-04-18 09:58:30.425937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:07.052 [2024-04-18 09:58:30.425955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:123160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:07.052 [2024-04-18 09:58:30.425970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:07.052 [2024-04-18 09:58:30.425988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:93024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:07.052 [2024-04-18 09:58:30.426003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:07.052 [2024-04-18 09:58:30.426024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:30856 len:8 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:07.052 [2024-04-18 09:58:30.426038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:07.052 [2024-04-18 09:58:30.426060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:71920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:07.052 [2024-04-18 09:58:30.426075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:07.052 [2024-04-18 09:58:30.426096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:130848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:07.052 [2024-04-18 09:58:30.426110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:07.052 [2024-04-18 09:58:30.426145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:91000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:07.052 [2024-04-18 09:58:30.426162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:07.052 [2024-04-18 09:58:30.426187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:75864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:07.052 [2024-04-18 09:58:30.426206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:07.052 [2024-04-18 09:58:30.426226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:89800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:07.052 [2024-04-18 09:58:30.426241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:07.052 [2024-04-18 09:58:30.426260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:124200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:07.052 [2024-04-18 09:58:30.426281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:07.052 [2024-04-18 09:58:30.426299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:28808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:07.052 [2024-04-18 09:58:30.426314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:07.052 [2024-04-18 09:58:30.426332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:125536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:07.052 [2024-04-18 09:58:30.426346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:07.052 [2024-04-18 09:58:30.426368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:17024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:07.052 [2024-04-18 09:58:30.426383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:07.053 [2024-04-18 09:58:30.426402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:118624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:30:07.053 [2024-04-18 09:58:30.426416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:07.053 [2024-04-18 09:58:30.426435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:109864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:07.053 [2024-04-18 09:58:30.426449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:07.053 [2024-04-18 09:58:30.426468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:124032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:07.053 [2024-04-18 09:58:30.426483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:07.053 [2024-04-18 09:58:30.426502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:64080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:07.053 [2024-04-18 09:58:30.426516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:07.053 [2024-04-18 09:58:30.426535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:106888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:07.053 [2024-04-18 09:58:30.426549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:07.053 [2024-04-18 09:58:30.426568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:65384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:07.053 [2024-04-18 09:58:30.426583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:07.053 [2024-04-18 09:58:30.426603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:11312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:07.053 [2024-04-18 09:58:30.426618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:07.053 [2024-04-18 09:58:30.426640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:94240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:07.053 [2024-04-18 09:58:30.426654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:07.053 [2024-04-18 09:58:30.426674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:104136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:07.053 [2024-04-18 09:58:30.426688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:07.053 [2024-04-18 09:58:30.426707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:125152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:07.053 [2024-04-18 09:58:30.426721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:07.053 [2024-04-18 09:58:30.426740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:112232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:07.053 [2024-04-18 
09:58:30.426759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:07.053 [2024-04-18 09:58:30.426778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:29000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:07.053 [2024-04-18 09:58:30.426793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:07.053 [2024-04-18 09:58:30.426812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:119256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:07.053 [2024-04-18 09:58:30.426826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:07.053 [2024-04-18 09:58:30.426845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:112288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:07.053 [2024-04-18 09:58:30.426859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:07.053 [2024-04-18 09:58:30.426878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:116640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:07.053 [2024-04-18 09:58:30.426903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:07.053 [2024-04-18 09:58:30.426927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:48776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:07.053 [2024-04-18 09:58:30.426942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:07.053 [2024-04-18 09:58:30.426961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:126184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:07.053 [2024-04-18 09:58:30.426975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:07.053 [2024-04-18 09:58:30.426994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:20800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:07.053 [2024-04-18 09:58:30.427008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:07.053 [2024-04-18 09:58:30.427032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:82880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:07.053 [2024-04-18 09:58:30.427046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:07.053 [2024-04-18 09:58:30.427066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:31816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:07.053 [2024-04-18 09:58:30.427080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:07.053 [2024-04-18 09:58:30.427122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:35256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:07.053 [2024-04-18 09:58:30.427150] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:07.053 [2024-04-18 09:58:30.427170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:77008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:07.053 [2024-04-18 09:58:30.427185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:07.053 [2024-04-18 09:58:30.427204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:89680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:07.053 [2024-04-18 09:58:30.427218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:07.053 [2024-04-18 09:58:30.427240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:33360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:07.053 [2024-04-18 09:58:30.427254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:07.053 [2024-04-18 09:58:30.427273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:82288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:07.053 [2024-04-18 09:58:30.427287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:07.053 [2024-04-18 09:58:30.427306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:104672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:07.053 [2024-04-18 09:58:30.427320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:07.053 [2024-04-18 09:58:30.427340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:59944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:07.053 [2024-04-18 09:58:30.427360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:07.053 [2024-04-18 09:58:30.427380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:84984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:07.053 [2024-04-18 09:58:30.427394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:07.054 [2024-04-18 09:58:30.427412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:29904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:07.054 [2024-04-18 09:58:30.427430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:07.054 [2024-04-18 09:58:30.427449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:86168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:07.054 [2024-04-18 09:58:30.427463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:07.054 [2024-04-18 09:58:30.427482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:84432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:07.054 [2024-04-18 09:58:30.427496] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:07.054 [2024-04-18 09:58:30.427520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:116752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:07.054 [2024-04-18 09:58:30.427534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:07.054 [2024-04-18 09:58:30.427554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:7168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:07.054 [2024-04-18 09:58:30.427568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:07.054 [2024-04-18 09:58:30.427587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:10864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:07.054 [2024-04-18 09:58:30.427601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:07.054 [2024-04-18 09:58:30.427621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:81720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:07.054 [2024-04-18 09:58:30.427635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:07.054 [2024-04-18 09:58:30.427654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:125168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:07.054 [2024-04-18 09:58:30.427668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:07.054 [2024-04-18 09:58:30.427686] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x614000007240 is same with the state(5) to be set 00:30:07.054 [2024-04-18 09:58:30.427706] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:07.054 [2024-04-18 09:58:30.427722] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:07.054 [2024-04-18 09:58:30.427736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:30400 len:8 PRP1 0x0 PRP2 0x0 00:30:07.054 [2024-04-18 09:58:30.427761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:07.054 [2024-04-18 09:58:30.428043] bdev_nvme.c:1600:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x614000007240 was disconnected and freed. reset controller. 
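For reference, each pair of NOTICE lines above is nvme_io_qpair_print_command()/spdk_nvme_print_completion() reporting one queued READ that was failed back when the TCP qpair was torn down; the status "(00/08)" decodes as status code type 0x0 (generic command status) with status code 0x08, "Command Aborted due to SQ Deletion". The flood is therefore the expected side effect of the forced controller reset, not a data error. A quick way to tally the aborts from a saved copy of this console output (a sketch; the file name build.log is an assumption):

    # Count aborted reads and qpair teardowns in a saved copy of the log.
    grep -c 'ABORTED - SQ DELETION' build.log
    grep -c 'was disconnected and freed. reset controller' build.log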
00:30:07.054 [2024-04-18 09:58:30.428393] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:07.054 [2024-04-18 09:58:30.428439] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x614000005040 (9): Bad file descriptor 00:30:07.054 [2024-04-18 09:58:30.428607] uring.c: 641:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:30:07.054 [2024-04-18 09:58:30.428706] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:07.054 [2024-04-18 09:58:30.428770] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:07.054 [2024-04-18 09:58:30.428799] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000005040 with addr=10.0.0.2, port=4420 00:30:07.054 [2024-04-18 09:58:30.428825] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x614000005040 is same with the state(5) to be set 00:30:07.054 [2024-04-18 09:58:30.428866] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x614000005040 (9): Bad file descriptor 00:30:07.054 [2024-04-18 09:58:30.428893] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:07.054 [2024-04-18 09:58:30.428911] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:07.054 [2024-04-18 09:58:30.428928] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:07.054 [2024-04-18 09:58:30.428963] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:07.054 [2024-04-18 09:58:30.428981] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:07.313 09:58:30 -- host/timeout.sh@128 -- # wait 82151 00:30:09.219 [2024-04-18 09:58:32.429288] uring.c: 641:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.219 [2024-04-18 09:58:32.429433] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.219 [2024-04-18 09:58:32.429506] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.219 [2024-04-18 09:58:32.429540] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000005040 with addr=10.0.0.2, port=4420 00:30:09.219 [2024-04-18 09:58:32.429563] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x614000005040 is same with the state(5) to be set 00:30:09.219 [2024-04-18 09:58:32.429631] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x614000005040 (9): Bad file descriptor 00:30:09.219 [2024-04-18 09:58:32.429675] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:09.219 [2024-04-18 09:58:32.429700] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:09.219 [2024-04-18 09:58:32.429716] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:09.219 [2024-04-18 09:58:32.429763] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
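errno 111 is ECONNREFUSED: both the io_uring and POSIX sock implementations fail connect() because nothing is accepting TCP connections on 10.0.0.2:4420 while the target side is down, so bdev_nvme marks cnode1 as failed and schedules the next reset. The same condition can be confirmed from the initiator side with a plain TCP probe (a sketch, not part of the test; assumes ncat/netcat is installed):

    # Fails with "Connection refused" while the NVMe/TCP listener is gone,
    # matching the errno = 111 lines above.
    nc -zv -w1 10.0.0.2 4420 || echo 'no listener on 10.0.0.2:4420'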
00:30:09.219 [2024-04-18 09:58:32.429781] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:11.138 [2024-04-18 09:58:34.430023] uring.c: 641:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.138 [2024-04-18 09:58:34.430172] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.138 [2024-04-18 09:58:34.430241] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.138 [2024-04-18 09:58:34.430272] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000005040 with addr=10.0.0.2, port=4420 00:30:11.138 [2024-04-18 09:58:34.430295] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x614000005040 is same with the state(5) to be set 00:30:11.138 [2024-04-18 09:58:34.430342] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x614000005040 (9): Bad file descriptor 00:30:11.138 [2024-04-18 09:58:34.430390] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:11.138 [2024-04-18 09:58:34.430413] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:11.138 [2024-04-18 09:58:34.430430] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:11.138 [2024-04-18 09:58:34.430476] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:11.138 [2024-04-18 09:58:34.430495] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:13.708 [2024-04-18 09:58:36.430627] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:14.273 00:30:14.273 Latency(us) 00:30:14.273 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:30:14.273 Job: NVMe0n1 (Core Mask 0x4, workload: randread, depth: 128, IO size: 4096) 00:30:14.273 NVMe0n1 : 8.15 1527.52 5.97 15.71 0.00 83050.31 11021.96 7046430.72 00:30:14.273 =================================================================================================================== 00:30:14.273 Total : 1527.52 5.97 15.71 0.00 83050.31 11021.96 7046430.72 00:30:14.274 0 00:30:14.274 09:58:37 -- host/timeout.sh@129 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:30:14.274 Attaching 5 probes... 
00:30:14.274 1336.030926: reset bdev controller NVMe0 00:30:14.274 1336.147258: reconnect bdev controller NVMe0 00:30:14.274 3336.705882: reconnect delay bdev controller NVMe0 00:30:14.274 3336.752363: reconnect bdev controller NVMe0 00:30:14.274 5337.507832: reconnect delay bdev controller NVMe0 00:30:14.274 5337.535909: reconnect bdev controller NVMe0 00:30:14.274 7338.214398: reconnect delay bdev controller NVMe0 00:30:14.274 7338.239659: reconnect bdev controller NVMe0 00:30:14.274 09:58:37 -- host/timeout.sh@132 -- # grep -c 'reconnect delay bdev controller NVMe0' 00:30:14.274 09:58:37 -- host/timeout.sh@132 -- # (( 3 <= 2 )) 00:30:14.274 09:58:37 -- host/timeout.sh@136 -- # kill 82111 00:30:14.274 09:58:37 -- host/timeout.sh@137 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:30:14.274 09:58:37 -- host/timeout.sh@139 -- # killprocess 82094 00:30:14.274 09:58:37 -- common/autotest_common.sh@936 -- # '[' -z 82094 ']' 00:30:14.274 09:58:37 -- common/autotest_common.sh@940 -- # kill -0 82094 00:30:14.274 09:58:37 -- common/autotest_common.sh@941 -- # uname 00:30:14.274 09:58:37 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:30:14.274 09:58:37 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 82094 00:30:14.274 killing process with pid 82094 00:30:14.274 Received shutdown signal, test time was about 8.207477 seconds 00:30:14.274 00:30:14.274 Latency(us) 00:30:14.274 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:30:14.274 =================================================================================================================== 00:30:14.274 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:30:14.274 09:58:37 -- common/autotest_common.sh@942 -- # process_name=reactor_2 00:30:14.274 09:58:37 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']' 00:30:14.274 09:58:37 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 82094' 00:30:14.274 09:58:37 -- common/autotest_common.sh@955 -- # kill 82094 00:30:14.274 09:58:37 -- common/autotest_common.sh@960 -- # wait 82094 00:30:15.651 09:58:38 -- host/timeout.sh@141 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:30:15.651 09:58:38 -- host/timeout.sh@143 -- # trap - SIGINT SIGTERM EXIT 00:30:15.651 09:58:38 -- host/timeout.sh@145 -- # nvmftestfini 00:30:15.651 09:58:38 -- nvmf/common.sh@477 -- # nvmfcleanup 00:30:15.651 09:58:38 -- nvmf/common.sh@117 -- # sync 00:30:15.651 09:58:38 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:30:15.651 09:58:38 -- nvmf/common.sh@120 -- # set +e 00:30:15.651 09:58:38 -- nvmf/common.sh@121 -- # for i in {1..20} 00:30:15.651 09:58:38 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:30:15.651 rmmod nvme_tcp 00:30:15.651 rmmod nvme_fabrics 00:30:15.651 rmmod nvme_keyring 00:30:15.651 09:58:38 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:30:15.651 09:58:38 -- nvmf/common.sh@124 -- # set -e 00:30:15.651 09:58:38 -- nvmf/common.sh@125 -- # return 0 00:30:15.651 09:58:38 -- nvmf/common.sh@478 -- # '[' -n 81642 ']' 00:30:15.651 09:58:38 -- nvmf/common.sh@479 -- # killprocess 81642 00:30:15.651 09:58:38 -- common/autotest_common.sh@936 -- # '[' -z 81642 ']' 00:30:15.651 09:58:38 -- common/autotest_common.sh@940 -- # kill -0 81642 00:30:15.651 09:58:38 -- common/autotest_common.sh@941 -- # uname 00:30:15.651 09:58:39 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:30:15.651 09:58:39 -- common/autotest_common.sh@942 -- # ps --no-headers 
-o comm= 81642 00:30:15.651 killing process with pid 81642 00:30:15.651 09:58:39 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:30:15.651 09:58:39 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:30:15.651 09:58:39 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 81642' 00:30:15.651 09:58:39 -- common/autotest_common.sh@955 -- # kill 81642 00:30:15.651 09:58:39 -- common/autotest_common.sh@960 -- # wait 81642 00:30:17.557 09:58:40 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:30:17.557 09:58:40 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:30:17.557 09:58:40 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:30:17.557 09:58:40 -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:30:17.557 09:58:40 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:30:17.557 09:58:40 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:17.557 09:58:40 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:30:17.557 09:58:40 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:17.557 09:58:40 -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:30:17.557 ************************************ 00:30:17.557 END TEST nvmf_timeout 00:30:17.557 ************************************ 00:30:17.557 00:30:17.557 real 0m51.270s 00:30:17.557 user 2m28.871s 00:30:17.557 sys 0m5.923s 00:30:17.557 09:58:40 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:30:17.557 09:58:40 -- common/autotest_common.sh@10 -- # set +x 00:30:17.557 09:58:40 -- nvmf/nvmf.sh@118 -- # [[ virt == phy ]] 00:30:17.557 09:58:40 -- nvmf/nvmf.sh@123 -- # timing_exit host 00:30:17.557 09:58:40 -- common/autotest_common.sh@716 -- # xtrace_disable 00:30:17.557 09:58:40 -- common/autotest_common.sh@10 -- # set +x 00:30:17.557 09:58:40 -- nvmf/nvmf.sh@125 -- # trap - SIGINT SIGTERM EXIT 00:30:17.557 00:30:17.557 real 10m10.493s 00:30:17.557 user 24m0.948s 00:30:17.557 sys 2m25.109s 00:30:17.557 09:58:40 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:30:17.557 09:58:40 -- common/autotest_common.sh@10 -- # set +x 00:30:17.557 ************************************ 00:30:17.557 END TEST nvmf_tcp 00:30:17.557 ************************************ 00:30:17.557 09:58:40 -- spdk/autotest.sh@286 -- # [[ 1 -eq 0 ]] 00:30:17.557 09:58:40 -- spdk/autotest.sh@290 -- # run_test nvmf_dif /home/vagrant/spdk_repo/spdk/test/nvmf/target/dif.sh 00:30:17.557 09:58:40 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:30:17.557 09:58:40 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:30:17.557 09:58:40 -- common/autotest_common.sh@10 -- # set +x 00:30:17.557 ************************************ 00:30:17.557 START TEST nvmf_dif 00:30:17.557 ************************************ 00:30:17.557 09:58:40 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/dif.sh 00:30:17.557 * Looking for test storage... 
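The pass/fail decision for the timeout case above comes down to counting the 'reconnect delay' probe hits in trace.txt: this run recorded 3, the test requires more than 2, so the (( 3 <= 2 )) guard does not fire and the host process is killed cleanly. Restated as a standalone check (a sketch of the logic, not the host/timeout.sh source itself):

    delays=$(grep -c 'reconnect delay bdev controller NVMe0' trace.txt)
    if (( delays <= 2 )); then
        echo "expected more than 2 delayed reconnects, got $delays" >&2
        exit 1
    fi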
00:30:17.557 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:30:17.557 09:58:40 -- target/dif.sh@13 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:30:17.557 09:58:40 -- nvmf/common.sh@7 -- # uname -s 00:30:17.557 09:58:40 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:30:17.557 09:58:40 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:30:17.557 09:58:40 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:30:17.557 09:58:40 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:30:17.557 09:58:40 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:30:17.557 09:58:40 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:30:17.557 09:58:40 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:30:17.557 09:58:40 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:30:17.557 09:58:40 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:30:17.557 09:58:40 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:30:17.557 09:58:40 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:e0c1fd61-2902-4c83-8893-eeb512aeb67d 00:30:17.557 09:58:40 -- nvmf/common.sh@18 -- # NVME_HOSTID=e0c1fd61-2902-4c83-8893-eeb512aeb67d 00:30:17.557 09:58:40 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:30:17.557 09:58:40 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:30:17.557 09:58:40 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:30:17.557 09:58:40 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:30:17.557 09:58:40 -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:30:17.557 09:58:40 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:30:17.557 09:58:40 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:30:17.557 09:58:40 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:30:17.557 09:58:40 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:17.557 09:58:40 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:17.557 09:58:40 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:17.557 09:58:40 -- paths/export.sh@5 -- # export PATH 00:30:17.557 09:58:40 -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:17.557 09:58:40 -- nvmf/common.sh@47 -- # : 0 00:30:17.557 09:58:40 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:30:17.557 09:58:40 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:30:17.557 09:58:40 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:30:17.557 09:58:40 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:30:17.557 09:58:40 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:30:17.557 09:58:40 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:30:17.557 09:58:40 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:30:17.557 09:58:40 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:30:17.557 09:58:40 -- target/dif.sh@15 -- # NULL_META=16 00:30:17.557 09:58:40 -- target/dif.sh@15 -- # NULL_BLOCK_SIZE=512 00:30:17.557 09:58:40 -- target/dif.sh@15 -- # NULL_SIZE=64 00:30:17.557 09:58:40 -- target/dif.sh@15 -- # NULL_DIF=1 00:30:17.557 09:58:40 -- target/dif.sh@135 -- # nvmftestinit 00:30:17.557 09:58:40 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:30:17.557 09:58:40 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:30:17.557 09:58:40 -- nvmf/common.sh@437 -- # prepare_net_devs 00:30:17.557 09:58:40 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:30:17.557 09:58:40 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:30:17.557 09:58:40 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:17.557 09:58:40 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:30:17.557 09:58:40 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:17.557 09:58:40 -- nvmf/common.sh@403 -- # [[ virt != virt ]] 00:30:17.558 09:58:40 -- nvmf/common.sh@405 -- # [[ no == yes ]] 00:30:17.558 09:58:40 -- nvmf/common.sh@412 -- # [[ virt == phy ]] 00:30:17.558 09:58:40 -- nvmf/common.sh@415 -- # [[ virt == phy-fallback ]] 00:30:17.558 09:58:40 -- nvmf/common.sh@420 -- # [[ tcp == tcp ]] 00:30:17.558 09:58:40 -- nvmf/common.sh@421 -- # nvmf_veth_init 00:30:17.558 09:58:40 -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:30:17.558 09:58:40 -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:30:17.558 09:58:40 -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:30:17.558 09:58:40 -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:30:17.558 09:58:40 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:30:17.558 09:58:40 -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:30:17.558 09:58:40 -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:30:17.558 09:58:40 -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:30:17.558 09:58:40 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:30:17.558 09:58:40 -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:30:17.558 09:58:40 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:30:17.558 09:58:40 -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:30:17.558 09:58:40 -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:30:17.558 09:58:40 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:30:17.558 Cannot find device "nvmf_tgt_br" 
00:30:17.558 09:58:40 -- nvmf/common.sh@155 -- # true 00:30:17.558 09:58:40 -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:30:17.558 Cannot find device "nvmf_tgt_br2" 00:30:17.558 09:58:40 -- nvmf/common.sh@156 -- # true 00:30:17.558 09:58:40 -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:30:17.558 09:58:40 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:30:17.558 Cannot find device "nvmf_tgt_br" 00:30:17.558 09:58:40 -- nvmf/common.sh@158 -- # true 00:30:17.558 09:58:40 -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:30:17.558 Cannot find device "nvmf_tgt_br2" 00:30:17.558 09:58:40 -- nvmf/common.sh@159 -- # true 00:30:17.558 09:58:40 -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:30:17.815 09:58:40 -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:30:17.815 09:58:40 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:30:17.815 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:30:17.815 09:58:41 -- nvmf/common.sh@162 -- # true 00:30:17.815 09:58:41 -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:30:17.815 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:30:17.815 09:58:41 -- nvmf/common.sh@163 -- # true 00:30:17.815 09:58:41 -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:30:17.815 09:58:41 -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:30:17.815 09:58:41 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:30:17.815 09:58:41 -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:30:17.815 09:58:41 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:30:17.815 09:58:41 -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:30:17.815 09:58:41 -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:30:17.815 09:58:41 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:30:17.815 09:58:41 -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:30:17.815 09:58:41 -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:30:17.815 09:58:41 -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:30:17.815 09:58:41 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:30:17.815 09:58:41 -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:30:17.815 09:58:41 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:30:17.815 09:58:41 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:30:17.815 09:58:41 -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:30:17.815 09:58:41 -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:30:17.815 09:58:41 -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:30:17.815 09:58:41 -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:30:17.815 09:58:41 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:30:17.815 09:58:41 -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:30:17.815 09:58:41 -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:30:17.815 09:58:41 -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:30:17.815 09:58:41 -- 
nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:30:17.815 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:30:17.815 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.111 ms 00:30:17.815 00:30:17.815 --- 10.0.0.2 ping statistics --- 00:30:17.815 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:17.815 rtt min/avg/max/mdev = 0.111/0.111/0.111/0.000 ms 00:30:17.815 09:58:41 -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:30:17.815 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:30:17.815 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.075 ms 00:30:17.815 00:30:17.815 --- 10.0.0.3 ping statistics --- 00:30:17.815 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:17.815 rtt min/avg/max/mdev = 0.075/0.075/0.075/0.000 ms 00:30:17.815 09:58:41 -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:30:17.815 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:30:17.815 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.035 ms 00:30:17.815 00:30:17.815 --- 10.0.0.1 ping statistics --- 00:30:17.815 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:17.815 rtt min/avg/max/mdev = 0.035/0.035/0.035/0.000 ms 00:30:17.815 09:58:41 -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:30:17.815 09:58:41 -- nvmf/common.sh@422 -- # return 0 00:30:17.815 09:58:41 -- nvmf/common.sh@439 -- # '[' iso == iso ']' 00:30:17.815 09:58:41 -- nvmf/common.sh@440 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:30:18.386 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:30:18.386 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:30:18.386 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:30:18.386 09:58:41 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:30:18.386 09:58:41 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:30:18.386 09:58:41 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:30:18.386 09:58:41 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:30:18.386 09:58:41 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:30:18.386 09:58:41 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:30:18.386 09:58:41 -- target/dif.sh@136 -- # NVMF_TRANSPORT_OPTS+=' --dif-insert-or-strip' 00:30:18.386 09:58:41 -- target/dif.sh@137 -- # nvmfappstart 00:30:18.386 09:58:41 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:30:18.386 09:58:41 -- common/autotest_common.sh@710 -- # xtrace_disable 00:30:18.386 09:58:41 -- common/autotest_common.sh@10 -- # set +x 00:30:18.386 09:58:41 -- nvmf/common.sh@470 -- # nvmfpid=82617 00:30:18.386 09:58:41 -- nvmf/common.sh@471 -- # waitforlisten 82617 00:30:18.386 09:58:41 -- nvmf/common.sh@469 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:30:18.386 09:58:41 -- common/autotest_common.sh@817 -- # '[' -z 82617 ']' 00:30:18.386 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:30:18.386 09:58:41 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:18.386 09:58:41 -- common/autotest_common.sh@822 -- # local max_retries=100 00:30:18.386 09:58:41 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
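Up to this point nvmf_veth_init has built the virtual test network: the SPDK target will run inside the nvmf_tgt_ns_spdk namespace behind veth pairs bridged by nvmf_br, reachable from the initiator (root namespace, 10.0.0.1) at 10.0.0.2 and 10.0.0.3, which the three pings confirm. Condensed to a standalone equivalent (a sketch of the same topology with only the first target interface; not the common.sh code itself):

    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br
    ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
    ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
    ip link add nvmf_br type bridge
    ip link set nvmf_init_if up; ip link set nvmf_init_br up
    ip link set nvmf_tgt_br up;  ip link set nvmf_br up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    ip netns exec nvmf_tgt_ns_spdk ip link set lo up
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br master nvmf_br
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
    iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
    ping -c 1 10.0.0.2    # initiator -> target, as in the log above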
00:30:18.386 09:58:41 -- common/autotest_common.sh@826 -- # xtrace_disable 00:30:18.386 09:58:41 -- common/autotest_common.sh@10 -- # set +x 00:30:18.386 [2024-04-18 09:58:41.704939] Starting SPDK v24.05-pre git sha1 65b4e17c6 / DPDK 23.11.0 initialization... 00:30:18.386 [2024-04-18 09:58:41.705151] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:30:18.645 [2024-04-18 09:58:41.888490] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:18.904 [2024-04-18 09:58:42.143435] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:30:18.904 [2024-04-18 09:58:42.143522] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:30:18.904 [2024-04-18 09:58:42.143545] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:30:18.904 [2024-04-18 09:58:42.143576] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:30:18.904 [2024-04-18 09:58:42.143592] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:30:18.904 [2024-04-18 09:58:42.143637] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:30:19.472 09:58:42 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:30:19.472 09:58:42 -- common/autotest_common.sh@850 -- # return 0 00:30:19.472 09:58:42 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:30:19.472 09:58:42 -- common/autotest_common.sh@716 -- # xtrace_disable 00:30:19.472 09:58:42 -- common/autotest_common.sh@10 -- # set +x 00:30:19.472 09:58:42 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:30:19.472 09:58:42 -- target/dif.sh@139 -- # create_transport 00:30:19.472 09:58:42 -- target/dif.sh@50 -- # rpc_cmd nvmf_create_transport -t tcp -o --dif-insert-or-strip 00:30:19.472 09:58:42 -- common/autotest_common.sh@549 -- # xtrace_disable 00:30:19.472 09:58:42 -- common/autotest_common.sh@10 -- # set +x 00:30:19.472 [2024-04-18 09:58:42.701573] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:30:19.472 09:58:42 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:30:19.472 09:58:42 -- target/dif.sh@141 -- # run_test fio_dif_1_default fio_dif_1 00:30:19.472 09:58:42 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:30:19.472 09:58:42 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:30:19.472 09:58:42 -- common/autotest_common.sh@10 -- # set +x 00:30:19.472 ************************************ 00:30:19.472 START TEST fio_dif_1_default 00:30:19.472 ************************************ 00:30:19.472 09:58:42 -- common/autotest_common.sh@1111 -- # fio_dif_1 00:30:19.472 09:58:42 -- target/dif.sh@86 -- # create_subsystems 0 00:30:19.472 09:58:42 -- target/dif.sh@28 -- # local sub 00:30:19.472 09:58:42 -- target/dif.sh@30 -- # for sub in "$@" 00:30:19.472 09:58:42 -- target/dif.sh@31 -- # create_subsystem 0 00:30:19.472 09:58:42 -- target/dif.sh@18 -- # local sub_id=0 00:30:19.472 09:58:42 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:30:19.472 09:58:42 -- common/autotest_common.sh@549 -- # xtrace_disable 00:30:19.472 09:58:42 -- common/autotest_common.sh@10 -- # set +x 00:30:19.472 bdev_null0 00:30:19.472 09:58:42 -- 
common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:30:19.472 09:58:42 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:30:19.472 09:58:42 -- common/autotest_common.sh@549 -- # xtrace_disable 00:30:19.472 09:58:42 -- common/autotest_common.sh@10 -- # set +x 00:30:19.472 09:58:42 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:30:19.472 09:58:42 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:30:19.472 09:58:42 -- common/autotest_common.sh@549 -- # xtrace_disable 00:30:19.472 09:58:42 -- common/autotest_common.sh@10 -- # set +x 00:30:19.472 09:58:42 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:30:19.472 09:58:42 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:30:19.472 09:58:42 -- common/autotest_common.sh@549 -- # xtrace_disable 00:30:19.472 09:58:42 -- common/autotest_common.sh@10 -- # set +x 00:30:19.472 [2024-04-18 09:58:42.813785] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:30:19.472 09:58:42 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:30:19.472 09:58:42 -- target/dif.sh@87 -- # fio /dev/fd/62 00:30:19.472 09:58:42 -- target/dif.sh@87 -- # create_json_sub_conf 0 00:30:19.472 09:58:42 -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:30:19.472 09:58:42 -- nvmf/common.sh@521 -- # config=() 00:30:19.472 09:58:42 -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:30:19.472 09:58:42 -- nvmf/common.sh@521 -- # local subsystem config 00:30:19.472 09:58:42 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:30:19.472 09:58:42 -- common/autotest_common.sh@1342 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:30:19.472 09:58:42 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:30:19.472 { 00:30:19.472 "params": { 00:30:19.472 "name": "Nvme$subsystem", 00:30:19.472 "trtype": "$TEST_TRANSPORT", 00:30:19.472 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:19.472 "adrfam": "ipv4", 00:30:19.472 "trsvcid": "$NVMF_PORT", 00:30:19.472 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:19.472 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:19.472 "hdgst": ${hdgst:-false}, 00:30:19.472 "ddgst": ${ddgst:-false} 00:30:19.472 }, 00:30:19.472 "method": "bdev_nvme_attach_controller" 00:30:19.472 } 00:30:19.472 EOF 00:30:19.472 )") 00:30:19.472 09:58:42 -- target/dif.sh@82 -- # gen_fio_conf 00:30:19.472 09:58:42 -- common/autotest_common.sh@1323 -- # local fio_dir=/usr/src/fio 00:30:19.472 09:58:42 -- target/dif.sh@54 -- # local file 00:30:19.472 09:58:42 -- common/autotest_common.sh@1325 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:30:19.472 09:58:42 -- target/dif.sh@56 -- # cat 00:30:19.472 09:58:42 -- common/autotest_common.sh@1325 -- # local sanitizers 00:30:19.472 09:58:42 -- common/autotest_common.sh@1326 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:30:19.472 09:58:42 -- common/autotest_common.sh@1327 -- # shift 00:30:19.472 09:58:42 -- nvmf/common.sh@543 -- # cat 00:30:19.472 09:58:42 -- common/autotest_common.sh@1329 -- # local asan_lib= 00:30:19.472 09:58:42 -- common/autotest_common.sh@1330 -- # for sanitizer in "${sanitizers[@]}" 00:30:19.472 09:58:42 -- common/autotest_common.sh@1331 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:30:19.472 09:58:42 -- 
target/dif.sh@72 -- # (( file = 1 )) 00:30:19.472 09:58:42 -- target/dif.sh@72 -- # (( file <= files )) 00:30:19.472 09:58:42 -- common/autotest_common.sh@1331 -- # grep libasan 00:30:19.472 09:58:42 -- common/autotest_common.sh@1331 -- # awk '{print $3}' 00:30:19.472 09:58:42 -- nvmf/common.sh@545 -- # jq . 00:30:19.472 09:58:42 -- nvmf/common.sh@546 -- # IFS=, 00:30:19.472 09:58:42 -- nvmf/common.sh@547 -- # printf '%s\n' '{ 00:30:19.473 "params": { 00:30:19.473 "name": "Nvme0", 00:30:19.473 "trtype": "tcp", 00:30:19.473 "traddr": "10.0.0.2", 00:30:19.473 "adrfam": "ipv4", 00:30:19.473 "trsvcid": "4420", 00:30:19.473 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:30:19.473 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:30:19.473 "hdgst": false, 00:30:19.473 "ddgst": false 00:30:19.473 }, 00:30:19.473 "method": "bdev_nvme_attach_controller" 00:30:19.473 }' 00:30:19.473 09:58:42 -- common/autotest_common.sh@1331 -- # asan_lib=/usr/lib64/libasan.so.8 00:30:19.473 09:58:42 -- common/autotest_common.sh@1332 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:30:19.473 09:58:42 -- common/autotest_common.sh@1333 -- # break 00:30:19.473 09:58:42 -- common/autotest_common.sh@1338 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:30:19.473 09:58:42 -- common/autotest_common.sh@1338 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:30:19.731 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:30:19.731 fio-3.35 00:30:19.731 Starting 1 thread 00:30:31.993 00:30:31.993 filename0: (groupid=0, jobs=1): err= 0: pid=82685: Thu Apr 18 09:58:53 2024 00:30:31.993 read: IOPS=6348, BW=24.8MiB/s (26.0MB/s)(248MiB/10001msec) 00:30:31.993 slat (nsec): min=7656, max=99879, avg=11634.75, stdev=4625.46 00:30:31.993 clat (usec): min=453, max=2428, avg=595.32, stdev=45.35 00:30:31.993 lat (usec): min=461, max=2441, avg=606.95, stdev=46.51 00:30:31.993 clat percentiles (usec): 00:30:31.993 | 1.00th=[ 510], 5.00th=[ 537], 10.00th=[ 553], 20.00th=[ 562], 00:30:31.993 | 30.00th=[ 570], 40.00th=[ 578], 50.00th=[ 586], 60.00th=[ 603], 00:30:31.993 | 70.00th=[ 611], 80.00th=[ 627], 90.00th=[ 644], 95.00th=[ 668], 00:30:31.993 | 99.00th=[ 717], 99.50th=[ 750], 99.90th=[ 865], 99.95th=[ 906], 00:30:31.993 | 99.99th=[ 1549] 00:30:31.993 bw ( KiB/s): min=24416, max=26368, per=99.83%, avg=25349.05, stdev=616.85, samples=19 00:30:31.993 iops : min= 6104, max= 6592, avg=6337.26, stdev=154.21, samples=19 00:30:31.993 lat (usec) : 500=0.58%, 750=98.95%, 1000=0.45% 00:30:31.993 lat (msec) : 2=0.02%, 4=0.01% 00:30:31.993 cpu : usr=86.57%, sys=11.55%, ctx=17, majf=0, minf=1076 00:30:31.993 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:30:31.993 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:31.993 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:31.993 issued rwts: total=63488,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:31.993 latency : target=0, window=0, percentile=100.00%, depth=4 00:30:31.993 00:30:31.993 Run status group 0 (all jobs): 00:30:31.993 READ: bw=24.8MiB/s (26.0MB/s), 24.8MiB/s-24.8MiB/s (26.0MB/s-26.0MB/s), io=248MiB (260MB), run=10001-10001msec 00:30:31.993 ----------------------------------------------------- 00:30:31.993 Suppressions used: 00:30:31.993 count bytes template 00:30:31.993 1 8 /usr/src/fio/parse.c 00:30:31.993 1 8 libtcmalloc_minimal.so 00:30:31.993 1 904 libcrypto.so 00:30:31.993 
----------------------------------------------------- 00:30:31.993 00:30:31.993 09:58:55 -- target/dif.sh@88 -- # destroy_subsystems 0 00:30:31.993 09:58:55 -- target/dif.sh@43 -- # local sub 00:30:31.993 09:58:55 -- target/dif.sh@45 -- # for sub in "$@" 00:30:31.993 09:58:55 -- target/dif.sh@46 -- # destroy_subsystem 0 00:30:31.993 09:58:55 -- target/dif.sh@36 -- # local sub_id=0 00:30:31.993 09:58:55 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:30:31.993 09:58:55 -- common/autotest_common.sh@549 -- # xtrace_disable 00:30:31.993 09:58:55 -- common/autotest_common.sh@10 -- # set +x 00:30:31.993 09:58:55 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:30:31.993 09:58:55 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:30:31.993 09:58:55 -- common/autotest_common.sh@549 -- # xtrace_disable 00:30:31.993 09:58:55 -- common/autotest_common.sh@10 -- # set +x 00:30:31.993 ************************************ 00:30:31.993 END TEST fio_dif_1_default 00:30:31.993 ************************************ 00:30:31.993 09:58:55 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:30:31.993 00:30:31.993 real 0m12.367s 00:30:31.993 user 0m10.536s 00:30:31.993 sys 0m1.568s 00:30:31.993 09:58:55 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:30:31.993 09:58:55 -- common/autotest_common.sh@10 -- # set +x 00:30:31.993 09:58:55 -- target/dif.sh@142 -- # run_test fio_dif_1_multi_subsystems fio_dif_1_multi_subsystems 00:30:31.993 09:58:55 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:30:31.993 09:58:55 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:30:31.993 09:58:55 -- common/autotest_common.sh@10 -- # set +x 00:30:31.993 ************************************ 00:30:31.993 START TEST fio_dif_1_multi_subsystems 00:30:31.993 ************************************ 00:30:31.993 09:58:55 -- common/autotest_common.sh@1111 -- # fio_dif_1_multi_subsystems 00:30:31.993 09:58:55 -- target/dif.sh@92 -- # local files=1 00:30:31.993 09:58:55 -- target/dif.sh@94 -- # create_subsystems 0 1 00:30:31.993 09:58:55 -- target/dif.sh@28 -- # local sub 00:30:31.993 09:58:55 -- target/dif.sh@30 -- # for sub in "$@" 00:30:31.993 09:58:55 -- target/dif.sh@31 -- # create_subsystem 0 00:30:31.993 09:58:55 -- target/dif.sh@18 -- # local sub_id=0 00:30:31.993 09:58:55 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:30:31.993 09:58:55 -- common/autotest_common.sh@549 -- # xtrace_disable 00:30:31.993 09:58:55 -- common/autotest_common.sh@10 -- # set +x 00:30:31.993 bdev_null0 00:30:31.993 09:58:55 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:30:31.993 09:58:55 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:30:31.993 09:58:55 -- common/autotest_common.sh@549 -- # xtrace_disable 00:30:31.993 09:58:55 -- common/autotest_common.sh@10 -- # set +x 00:30:31.993 09:58:55 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:30:31.993 09:58:55 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:30:31.993 09:58:55 -- common/autotest_common.sh@549 -- # xtrace_disable 00:30:31.993 09:58:55 -- common/autotest_common.sh@10 -- # set +x 00:30:31.993 09:58:55 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:30:31.993 09:58:55 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:30:31.993 09:58:55 -- 
common/autotest_common.sh@549 -- # xtrace_disable 00:30:31.993 09:58:55 -- common/autotest_common.sh@10 -- # set +x 00:30:31.993 [2024-04-18 09:58:55.295194] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:30:31.993 09:58:55 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:30:31.993 09:58:55 -- target/dif.sh@30 -- # for sub in "$@" 00:30:31.993 09:58:55 -- target/dif.sh@31 -- # create_subsystem 1 00:30:31.993 09:58:55 -- target/dif.sh@18 -- # local sub_id=1 00:30:31.993 09:58:55 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:30:31.993 09:58:55 -- common/autotest_common.sh@549 -- # xtrace_disable 00:30:31.993 09:58:55 -- common/autotest_common.sh@10 -- # set +x 00:30:31.993 bdev_null1 00:30:31.993 09:58:55 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:30:31.993 09:58:55 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:30:31.993 09:58:55 -- common/autotest_common.sh@549 -- # xtrace_disable 00:30:31.993 09:58:55 -- common/autotest_common.sh@10 -- # set +x 00:30:31.993 09:58:55 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:30:31.993 09:58:55 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:30:31.993 09:58:55 -- common/autotest_common.sh@549 -- # xtrace_disable 00:30:31.993 09:58:55 -- common/autotest_common.sh@10 -- # set +x 00:30:31.993 09:58:55 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:30:31.993 09:58:55 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:30:31.993 09:58:55 -- common/autotest_common.sh@549 -- # xtrace_disable 00:30:31.993 09:58:55 -- common/autotest_common.sh@10 -- # set +x 00:30:31.993 09:58:55 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:30:31.993 09:58:55 -- target/dif.sh@95 -- # fio /dev/fd/62 00:30:31.993 09:58:55 -- target/dif.sh@95 -- # create_json_sub_conf 0 1 00:30:31.993 09:58:55 -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:30:31.993 09:58:55 -- nvmf/common.sh@521 -- # config=() 00:30:31.993 09:58:55 -- nvmf/common.sh@521 -- # local subsystem config 00:30:31.993 09:58:55 -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:30:31.993 09:58:55 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:30:31.993 09:58:55 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:30:31.993 { 00:30:31.993 "params": { 00:30:31.993 "name": "Nvme$subsystem", 00:30:31.993 "trtype": "$TEST_TRANSPORT", 00:30:31.993 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:31.993 "adrfam": "ipv4", 00:30:31.993 "trsvcid": "$NVMF_PORT", 00:30:31.993 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:31.993 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:31.993 "hdgst": ${hdgst:-false}, 00:30:31.993 "ddgst": ${ddgst:-false} 00:30:31.993 }, 00:30:31.993 "method": "bdev_nvme_attach_controller" 00:30:31.993 } 00:30:31.993 EOF 00:30:31.993 )") 00:30:31.993 09:58:55 -- common/autotest_common.sh@1342 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:30:31.993 09:58:55 -- target/dif.sh@82 -- # gen_fio_conf 00:30:31.993 09:58:55 -- common/autotest_common.sh@1323 -- # local fio_dir=/usr/src/fio 00:30:31.993 09:58:55 -- target/dif.sh@54 -- # local file 00:30:31.993 09:58:55 -- common/autotest_common.sh@1325 -- # sanitizers=('libasan' 
'libclang_rt.asan') 00:30:31.993 09:58:55 -- target/dif.sh@56 -- # cat 00:30:31.993 09:58:55 -- common/autotest_common.sh@1325 -- # local sanitizers 00:30:31.993 09:58:55 -- common/autotest_common.sh@1326 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:30:31.993 09:58:55 -- nvmf/common.sh@543 -- # cat 00:30:31.993 09:58:55 -- common/autotest_common.sh@1327 -- # shift 00:30:31.993 09:58:55 -- common/autotest_common.sh@1329 -- # local asan_lib= 00:30:31.993 09:58:55 -- common/autotest_common.sh@1330 -- # for sanitizer in "${sanitizers[@]}" 00:30:31.993 09:58:55 -- target/dif.sh@72 -- # (( file = 1 )) 00:30:31.993 09:58:55 -- target/dif.sh@72 -- # (( file <= files )) 00:30:31.993 09:58:55 -- target/dif.sh@73 -- # cat 00:30:31.993 09:58:55 -- common/autotest_common.sh@1331 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:30:31.993 09:58:55 -- common/autotest_common.sh@1331 -- # grep libasan 00:30:31.993 09:58:55 -- common/autotest_common.sh@1331 -- # awk '{print $3}' 00:30:31.993 09:58:55 -- target/dif.sh@72 -- # (( file++ )) 00:30:31.993 09:58:55 -- target/dif.sh@72 -- # (( file <= files )) 00:30:31.993 09:58:55 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:30:31.994 09:58:55 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:30:31.994 { 00:30:31.994 "params": { 00:30:31.994 "name": "Nvme$subsystem", 00:30:31.994 "trtype": "$TEST_TRANSPORT", 00:30:31.994 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:31.994 "adrfam": "ipv4", 00:30:31.994 "trsvcid": "$NVMF_PORT", 00:30:31.994 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:31.994 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:31.994 "hdgst": ${hdgst:-false}, 00:30:31.994 "ddgst": ${ddgst:-false} 00:30:31.994 }, 00:30:31.994 "method": "bdev_nvme_attach_controller" 00:30:31.994 } 00:30:31.994 EOF 00:30:31.994 )") 00:30:31.994 09:58:55 -- nvmf/common.sh@543 -- # cat 00:30:31.994 09:58:55 -- nvmf/common.sh@545 -- # jq . 
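As in the fio_dif_1_default case above, the workload reaches the target through fio's SPDK bdev plugin rather than a kernel NVMe device: gen_nvmf_target_json emits a bdev_nvme_attach_controller JSON config that is fed to fio on /dev/fd/62, the generated job file arrives on /dev/fd/61, and the plugin (plus libasan, since this build enables ASan) is preloaded. Reduced to a manual invocation with ordinary files in place of the fd redirections (a sketch; bdev.json and dif.fio are assumed names):

    LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' \
        /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf bdev.json dif.fio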
00:30:31.994 09:58:55 -- nvmf/common.sh@546 -- # IFS=, 00:30:31.994 09:58:55 -- nvmf/common.sh@547 -- # printf '%s\n' '{ 00:30:31.994 "params": { 00:30:31.994 "name": "Nvme0", 00:30:31.994 "trtype": "tcp", 00:30:31.994 "traddr": "10.0.0.2", 00:30:31.994 "adrfam": "ipv4", 00:30:31.994 "trsvcid": "4420", 00:30:31.994 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:30:31.994 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:30:31.994 "hdgst": false, 00:30:31.994 "ddgst": false 00:30:31.994 }, 00:30:31.994 "method": "bdev_nvme_attach_controller" 00:30:31.994 },{ 00:30:31.994 "params": { 00:30:31.994 "name": "Nvme1", 00:30:31.994 "trtype": "tcp", 00:30:31.994 "traddr": "10.0.0.2", 00:30:31.994 "adrfam": "ipv4", 00:30:31.994 "trsvcid": "4420", 00:30:31.994 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:30:31.994 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:30:31.994 "hdgst": false, 00:30:31.994 "ddgst": false 00:30:31.994 }, 00:30:31.994 "method": "bdev_nvme_attach_controller" 00:30:31.994 }' 00:30:31.994 09:58:55 -- common/autotest_common.sh@1331 -- # asan_lib=/usr/lib64/libasan.so.8 00:30:31.994 09:58:55 -- common/autotest_common.sh@1332 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:30:31.994 09:58:55 -- common/autotest_common.sh@1333 -- # break 00:30:31.994 09:58:55 -- common/autotest_common.sh@1338 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:30:31.994 09:58:55 -- common/autotest_common.sh@1338 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:30:32.253 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:30:32.253 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:30:32.253 fio-3.35 00:30:32.253 Starting 2 threads 00:30:44.505 00:30:44.505 filename0: (groupid=0, jobs=1): err= 0: pid=82852: Thu Apr 18 09:59:06 2024 00:30:44.505 read: IOPS=3654, BW=14.3MiB/s (15.0MB/s)(143MiB/10001msec) 00:30:44.505 slat (usec): min=8, max=164, avg=15.67, stdev= 5.07 00:30:44.505 clat (usec): min=828, max=2219, avg=1051.30, stdev=62.75 00:30:44.505 lat (usec): min=844, max=2236, avg=1066.97, stdev=63.98 00:30:44.505 clat percentiles (usec): 00:30:44.505 | 1.00th=[ 914], 5.00th=[ 955], 10.00th=[ 971], 20.00th=[ 1004], 00:30:44.505 | 30.00th=[ 1020], 40.00th=[ 1037], 50.00th=[ 1057], 60.00th=[ 1057], 00:30:44.505 | 70.00th=[ 1074], 80.00th=[ 1106], 90.00th=[ 1123], 95.00th=[ 1156], 00:30:44.505 | 99.00th=[ 1221], 99.50th=[ 1237], 99.90th=[ 1303], 99.95th=[ 1401], 00:30:44.505 | 99.99th=[ 2180] 00:30:44.505 bw ( KiB/s): min=14272, max=14880, per=49.96%, avg=14607.16, stdev=211.56, samples=19 00:30:44.505 iops : min= 3568, max= 3720, avg=3651.79, stdev=52.89, samples=19 00:30:44.505 lat (usec) : 1000=18.42% 00:30:44.505 lat (msec) : 2=81.57%, 4=0.01% 00:30:44.505 cpu : usr=90.60%, sys=7.97%, ctx=91, majf=0, minf=1076 00:30:44.505 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:30:44.505 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:44.506 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:44.506 issued rwts: total=36548,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:44.506 latency : target=0, window=0, percentile=100.00%, depth=4 00:30:44.506 filename1: (groupid=0, jobs=1): err= 0: pid=82853: Thu Apr 18 09:59:06 2024 00:30:44.506 read: IOPS=3654, BW=14.3MiB/s (15.0MB/s)(143MiB/10001msec) 00:30:44.506 slat (usec): min=6, max=159, avg=16.10, 
stdev= 5.39 00:30:44.506 clat (usec): min=609, max=2288, avg=1049.02, stdev=53.10 00:30:44.506 lat (usec): min=618, max=2305, avg=1065.11, stdev=53.96 00:30:44.506 clat percentiles (usec): 00:30:44.506 | 1.00th=[ 938], 5.00th=[ 971], 10.00th=[ 988], 20.00th=[ 1012], 00:30:44.506 | 30.00th=[ 1020], 40.00th=[ 1037], 50.00th=[ 1045], 60.00th=[ 1057], 00:30:44.506 | 70.00th=[ 1074], 80.00th=[ 1090], 90.00th=[ 1106], 95.00th=[ 1139], 00:30:44.506 | 99.00th=[ 1188], 99.50th=[ 1221], 99.90th=[ 1287], 99.95th=[ 1418], 00:30:44.506 | 99.99th=[ 2245] 00:30:44.506 bw ( KiB/s): min=14240, max=14880, per=49.96%, avg=14607.16, stdev=215.82, samples=19 00:30:44.506 iops : min= 3560, max= 3720, avg=3651.79, stdev=53.95, samples=19 00:30:44.506 lat (usec) : 750=0.02%, 1000=14.29% 00:30:44.506 lat (msec) : 2=85.68%, 4=0.01% 00:30:44.506 cpu : usr=90.83%, sys=7.70%, ctx=20, majf=0, minf=1076 00:30:44.506 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:30:44.506 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:44.506 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:44.506 issued rwts: total=36548,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:44.506 latency : target=0, window=0, percentile=100.00%, depth=4 00:30:44.506 00:30:44.506 Run status group 0 (all jobs): 00:30:44.506 READ: bw=28.5MiB/s (29.9MB/s), 14.3MiB/s-14.3MiB/s (15.0MB/s-15.0MB/s), io=286MiB (299MB), run=10001-10001msec 00:30:44.765 ----------------------------------------------------- 00:30:44.765 Suppressions used: 00:30:44.765 count bytes template 00:30:44.765 2 16 /usr/src/fio/parse.c 00:30:44.765 1 8 libtcmalloc_minimal.so 00:30:44.765 1 904 libcrypto.so 00:30:44.765 ----------------------------------------------------- 00:30:44.765 00:30:44.765 09:59:07 -- target/dif.sh@96 -- # destroy_subsystems 0 1 00:30:44.765 09:59:07 -- target/dif.sh@43 -- # local sub 00:30:44.765 09:59:07 -- target/dif.sh@45 -- # for sub in "$@" 00:30:44.765 09:59:07 -- target/dif.sh@46 -- # destroy_subsystem 0 00:30:44.765 09:59:07 -- target/dif.sh@36 -- # local sub_id=0 00:30:44.765 09:59:07 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:30:44.765 09:59:07 -- common/autotest_common.sh@549 -- # xtrace_disable 00:30:44.765 09:59:07 -- common/autotest_common.sh@10 -- # set +x 00:30:44.765 09:59:07 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:30:44.765 09:59:07 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:30:44.765 09:59:07 -- common/autotest_common.sh@549 -- # xtrace_disable 00:30:44.765 09:59:07 -- common/autotest_common.sh@10 -- # set +x 00:30:44.765 09:59:07 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:30:44.765 09:59:07 -- target/dif.sh@45 -- # for sub in "$@" 00:30:44.765 09:59:07 -- target/dif.sh@46 -- # destroy_subsystem 1 00:30:44.765 09:59:07 -- target/dif.sh@36 -- # local sub_id=1 00:30:44.765 09:59:07 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:30:44.765 09:59:07 -- common/autotest_common.sh@549 -- # xtrace_disable 00:30:44.765 09:59:07 -- common/autotest_common.sh@10 -- # set +x 00:30:44.765 09:59:07 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:30:44.765 09:59:07 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:30:44.765 09:59:07 -- common/autotest_common.sh@549 -- # xtrace_disable 00:30:44.765 09:59:07 -- common/autotest_common.sh@10 -- # set +x 00:30:44.765 ************************************ 00:30:44.765 END TEST 
fio_dif_1_multi_subsystems 00:30:44.765 ************************************ 00:30:44.765 09:59:07 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:30:44.765 00:30:44.765 real 0m12.710s 00:30:44.765 user 0m20.335s 00:30:44.765 sys 0m2.020s 00:30:44.765 09:59:07 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:30:44.765 09:59:07 -- common/autotest_common.sh@10 -- # set +x 00:30:44.765 09:59:08 -- target/dif.sh@143 -- # run_test fio_dif_rand_params fio_dif_rand_params 00:30:44.765 09:59:08 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:30:44.765 09:59:08 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:30:44.765 09:59:08 -- common/autotest_common.sh@10 -- # set +x 00:30:44.765 ************************************ 00:30:44.765 START TEST fio_dif_rand_params 00:30:44.765 ************************************ 00:30:44.765 09:59:08 -- common/autotest_common.sh@1111 -- # fio_dif_rand_params 00:30:44.765 09:59:08 -- target/dif.sh@100 -- # local NULL_DIF 00:30:44.765 09:59:08 -- target/dif.sh@101 -- # local bs numjobs runtime iodepth files 00:30:44.765 09:59:08 -- target/dif.sh@103 -- # NULL_DIF=3 00:30:44.765 09:59:08 -- target/dif.sh@103 -- # bs=128k 00:30:44.765 09:59:08 -- target/dif.sh@103 -- # numjobs=3 00:30:44.765 09:59:08 -- target/dif.sh@103 -- # iodepth=3 00:30:44.765 09:59:08 -- target/dif.sh@103 -- # runtime=5 00:30:44.765 09:59:08 -- target/dif.sh@105 -- # create_subsystems 0 00:30:44.765 09:59:08 -- target/dif.sh@28 -- # local sub 00:30:44.765 09:59:08 -- target/dif.sh@30 -- # for sub in "$@" 00:30:44.765 09:59:08 -- target/dif.sh@31 -- # create_subsystem 0 00:30:44.765 09:59:08 -- target/dif.sh@18 -- # local sub_id=0 00:30:44.765 09:59:08 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:30:44.765 09:59:08 -- common/autotest_common.sh@549 -- # xtrace_disable 00:30:44.765 09:59:08 -- common/autotest_common.sh@10 -- # set +x 00:30:44.765 bdev_null0 00:30:44.765 09:59:08 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:30:44.765 09:59:08 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:30:44.765 09:59:08 -- common/autotest_common.sh@549 -- # xtrace_disable 00:30:44.765 09:59:08 -- common/autotest_common.sh@10 -- # set +x 00:30:44.765 09:59:08 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:30:44.765 09:59:08 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:30:44.765 09:59:08 -- common/autotest_common.sh@549 -- # xtrace_disable 00:30:44.765 09:59:08 -- common/autotest_common.sh@10 -- # set +x 00:30:44.765 09:59:08 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:30:44.765 09:59:08 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:30:44.765 09:59:08 -- common/autotest_common.sh@549 -- # xtrace_disable 00:30:44.765 09:59:08 -- common/autotest_common.sh@10 -- # set +x 00:30:44.765 [2024-04-18 09:59:08.125329] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:30:44.765 09:59:08 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:30:44.765 09:59:08 -- target/dif.sh@106 -- # fio /dev/fd/62 00:30:44.765 09:59:08 -- target/dif.sh@106 -- # create_json_sub_conf 0 00:30:44.765 09:59:08 -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:30:44.765 09:59:08 -- nvmf/common.sh@521 -- # config=() 00:30:44.765 09:59:08 -- nvmf/common.sh@521 -- # local subsystem config 
00:30:44.765 09:59:08 -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:30:44.765 09:59:08 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:30:44.765 09:59:08 -- common/autotest_common.sh@1342 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:30:44.765 09:59:08 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:30:44.765 { 00:30:44.765 "params": { 00:30:44.765 "name": "Nvme$subsystem", 00:30:44.765 "trtype": "$TEST_TRANSPORT", 00:30:44.765 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:44.765 "adrfam": "ipv4", 00:30:44.765 "trsvcid": "$NVMF_PORT", 00:30:44.765 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:44.765 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:44.765 "hdgst": ${hdgst:-false}, 00:30:44.765 "ddgst": ${ddgst:-false} 00:30:44.765 }, 00:30:44.765 "method": "bdev_nvme_attach_controller" 00:30:44.765 } 00:30:44.765 EOF 00:30:44.765 )") 00:30:44.765 09:59:08 -- target/dif.sh@82 -- # gen_fio_conf 00:30:44.765 09:59:08 -- common/autotest_common.sh@1323 -- # local fio_dir=/usr/src/fio 00:30:44.765 09:59:08 -- common/autotest_common.sh@1325 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:30:44.765 09:59:08 -- target/dif.sh@54 -- # local file 00:30:44.765 09:59:08 -- common/autotest_common.sh@1325 -- # local sanitizers 00:30:44.765 09:59:08 -- target/dif.sh@56 -- # cat 00:30:44.765 09:59:08 -- common/autotest_common.sh@1326 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:30:44.765 09:59:08 -- common/autotest_common.sh@1327 -- # shift 00:30:44.765 09:59:08 -- common/autotest_common.sh@1329 -- # local asan_lib= 00:30:44.765 09:59:08 -- common/autotest_common.sh@1330 -- # for sanitizer in "${sanitizers[@]}" 00:30:44.765 09:59:08 -- nvmf/common.sh@543 -- # cat 00:30:44.765 09:59:08 -- common/autotest_common.sh@1331 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:30:44.765 09:59:08 -- target/dif.sh@72 -- # (( file = 1 )) 00:30:44.765 09:59:08 -- target/dif.sh@72 -- # (( file <= files )) 00:30:44.765 09:59:08 -- common/autotest_common.sh@1331 -- # grep libasan 00:30:44.765 09:59:08 -- common/autotest_common.sh@1331 -- # awk '{print $3}' 00:30:44.765 09:59:08 -- nvmf/common.sh@545 -- # jq . 00:30:44.765 09:59:08 -- nvmf/common.sh@546 -- # IFS=, 00:30:44.765 09:59:08 -- nvmf/common.sh@547 -- # printf '%s\n' '{ 00:30:44.765 "params": { 00:30:44.765 "name": "Nvme0", 00:30:44.765 "trtype": "tcp", 00:30:44.765 "traddr": "10.0.0.2", 00:30:44.765 "adrfam": "ipv4", 00:30:44.765 "trsvcid": "4420", 00:30:44.765 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:30:44.765 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:30:44.765 "hdgst": false, 00:30:44.765 "ddgst": false 00:30:44.765 }, 00:30:44.765 "method": "bdev_nvme_attach_controller" 00:30:44.765 }' 00:30:45.025 09:59:08 -- common/autotest_common.sh@1331 -- # asan_lib=/usr/lib64/libasan.so.8 00:30:45.025 09:59:08 -- common/autotest_common.sh@1332 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:30:45.025 09:59:08 -- common/autotest_common.sh@1333 -- # break 00:30:45.025 09:59:08 -- common/autotest_common.sh@1338 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:30:45.025 09:59:08 -- common/autotest_common.sh@1338 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:30:45.025 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:30:45.025 ... 
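
For reference, the JSON that fio reads from /dev/fd/62 is the per-controller object printed a few lines above wrapped in SPDK's usual bdev-subsystem envelope; a reconstruction (envelope assumed from the standard --spdk_json_conf layout, params copied from the log, other config entries elided):

  {
    "subsystems": [
      {
        "subsystem": "bdev",
        "config": [
          {
            "method": "bdev_nvme_attach_controller",
            "params": {
              "name": "Nvme0",
              "trtype": "tcp",
              "traddr": "10.0.0.2",
              "adrfam": "ipv4",
              "trsvcid": "4420",
              "subnqn": "nqn.2016-06.io.spdk:cnode0",
              "hostnqn": "nqn.2016-06.io.spdk:host0",
              "hdgst": false,
              "ddgst": false
            }
          }
        ]
      }
    ]
  }
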
00:30:45.025 fio-3.35 00:30:45.025 Starting 3 threads 00:30:51.586 00:30:51.586 filename0: (groupid=0, jobs=1): err= 0: pid=83023: Thu Apr 18 09:59:14 2024 00:30:51.586 read: IOPS=216, BW=27.1MiB/s (28.4MB/s)(135MiB/5001msec) 00:30:51.586 slat (nsec): min=5599, max=65341, avg=20061.84, stdev=7657.98 00:30:51.586 clat (usec): min=13259, max=14778, avg=13804.25, stdev=173.81 00:30:51.586 lat (usec): min=13274, max=14809, avg=13824.31, stdev=174.88 00:30:51.586 clat percentiles (usec): 00:30:51.586 | 1.00th=[13435], 5.00th=[13566], 10.00th=[13566], 20.00th=[13698], 00:30:51.586 | 30.00th=[13698], 40.00th=[13698], 50.00th=[13829], 60.00th=[13829], 00:30:51.586 | 70.00th=[13829], 80.00th=[13960], 90.00th=[14091], 95.00th=[14091], 00:30:51.586 | 99.00th=[14222], 99.50th=[14615], 99.90th=[14746], 99.95th=[14746], 00:30:51.586 | 99.99th=[14746] 00:30:51.586 bw ( KiB/s): min=27592, max=28416, per=33.36%, avg=27733.22, stdev=257.54, samples=9 00:30:51.586 iops : min= 215, max= 222, avg=216.56, stdev= 2.07, samples=9 00:30:51.586 lat (msec) : 20=100.00% 00:30:51.586 cpu : usr=92.56%, sys=6.74%, ctx=93, majf=0, minf=1073 00:30:51.586 IO depths : 1=33.3%, 2=66.7%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:30:51.586 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:51.586 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:51.586 issued rwts: total=1083,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:51.586 latency : target=0, window=0, percentile=100.00%, depth=3 00:30:51.586 filename0: (groupid=0, jobs=1): err= 0: pid=83024: Thu Apr 18 09:59:14 2024 00:30:51.586 read: IOPS=216, BW=27.1MiB/s (28.4MB/s)(135MiB/5002msec) 00:30:51.586 slat (nsec): min=8757, max=60417, avg=19785.70, stdev=7372.54 00:30:51.586 clat (usec): min=13254, max=15603, avg=13808.36, stdev=200.15 00:30:51.586 lat (usec): min=13268, max=15644, avg=13828.15, stdev=201.74 00:30:51.586 clat percentiles (usec): 00:30:51.586 | 1.00th=[13435], 5.00th=[13566], 10.00th=[13566], 20.00th=[13698], 00:30:51.586 | 30.00th=[13698], 40.00th=[13698], 50.00th=[13829], 60.00th=[13829], 00:30:51.586 | 70.00th=[13829], 80.00th=[13960], 90.00th=[14091], 95.00th=[14091], 00:30:51.586 | 99.00th=[14353], 99.50th=[14746], 99.90th=[15533], 99.95th=[15664], 00:30:51.586 | 99.99th=[15664] 00:30:51.586 bw ( KiB/s): min=27592, max=28416, per=33.35%, avg=27727.11, stdev=259.00, samples=9 00:30:51.586 iops : min= 215, max= 222, avg=216.56, stdev= 2.07, samples=9 00:30:51.586 lat (msec) : 20=100.00% 00:30:51.586 cpu : usr=92.00%, sys=7.38%, ctx=6, majf=0, minf=1075 00:30:51.586 IO depths : 1=33.3%, 2=66.7%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:30:51.586 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:51.586 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:51.586 issued rwts: total=1083,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:51.586 latency : target=0, window=0, percentile=100.00%, depth=3 00:30:51.586 filename0: (groupid=0, jobs=1): err= 0: pid=83025: Thu Apr 18 09:59:14 2024 00:30:51.586 read: IOPS=216, BW=27.1MiB/s (28.4MB/s)(135MiB/5001msec) 00:30:51.586 slat (nsec): min=5989, max=60619, avg=19763.50, stdev=6716.33 00:30:51.586 clat (usec): min=13243, max=14993, avg=13806.92, stdev=184.68 00:30:51.586 lat (usec): min=13258, max=15015, avg=13826.69, stdev=186.07 00:30:51.586 clat percentiles (usec): 00:30:51.586 | 1.00th=[13566], 5.00th=[13566], 10.00th=[13566], 20.00th=[13698], 00:30:51.586 | 30.00th=[13698], 40.00th=[13698], 
50.00th=[13829], 60.00th=[13829], 00:30:51.586 | 70.00th=[13829], 80.00th=[13960], 90.00th=[14091], 95.00th=[14091], 00:30:51.586 | 99.00th=[14353], 99.50th=[14746], 99.90th=[15008], 99.95th=[15008], 00:30:51.586 | 99.99th=[15008] 00:30:51.586 bw ( KiB/s): min=27592, max=28416, per=33.36%, avg=27733.22, stdev=257.54, samples=9 00:30:51.586 iops : min= 215, max= 222, avg=216.56, stdev= 2.07, samples=9 00:30:51.586 lat (msec) : 20=100.00% 00:30:51.586 cpu : usr=92.80%, sys=6.60%, ctx=13, majf=0, minf=1075 00:30:51.586 IO depths : 1=33.3%, 2=66.7%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:30:51.586 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:51.586 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:51.586 issued rwts: total=1083,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:51.586 latency : target=0, window=0, percentile=100.00%, depth=3 00:30:51.586 00:30:51.586 Run status group 0 (all jobs): 00:30:51.586 READ: bw=81.2MiB/s (85.1MB/s), 27.1MiB/s-27.1MiB/s (28.4MB/s-28.4MB/s), io=406MiB (426MB), run=5001-5002msec 00:30:52.154 ----------------------------------------------------- 00:30:52.154 Suppressions used: 00:30:52.154 count bytes template 00:30:52.154 5 44 /usr/src/fio/parse.c 00:30:52.154 1 8 libtcmalloc_minimal.so 00:30:52.154 1 904 libcrypto.so 00:30:52.154 ----------------------------------------------------- 00:30:52.154 00:30:52.154 09:59:15 -- target/dif.sh@107 -- # destroy_subsystems 0 00:30:52.154 09:59:15 -- target/dif.sh@43 -- # local sub 00:30:52.154 09:59:15 -- target/dif.sh@45 -- # for sub in "$@" 00:30:52.154 09:59:15 -- target/dif.sh@46 -- # destroy_subsystem 0 00:30:52.154 09:59:15 -- target/dif.sh@36 -- # local sub_id=0 00:30:52.154 09:59:15 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:30:52.154 09:59:15 -- common/autotest_common.sh@549 -- # xtrace_disable 00:30:52.154 09:59:15 -- common/autotest_common.sh@10 -- # set +x 00:30:52.154 09:59:15 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:30:52.154 09:59:15 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:30:52.154 09:59:15 -- common/autotest_common.sh@549 -- # xtrace_disable 00:30:52.154 09:59:15 -- common/autotest_common.sh@10 -- # set +x 00:30:52.154 09:59:15 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:30:52.154 09:59:15 -- target/dif.sh@109 -- # NULL_DIF=2 00:30:52.154 09:59:15 -- target/dif.sh@109 -- # bs=4k 00:30:52.154 09:59:15 -- target/dif.sh@109 -- # numjobs=8 00:30:52.154 09:59:15 -- target/dif.sh@109 -- # iodepth=16 00:30:52.154 09:59:15 -- target/dif.sh@109 -- # runtime= 00:30:52.154 09:59:15 -- target/dif.sh@109 -- # files=2 00:30:52.154 09:59:15 -- target/dif.sh@111 -- # create_subsystems 0 1 2 00:30:52.154 09:59:15 -- target/dif.sh@28 -- # local sub 00:30:52.154 09:59:15 -- target/dif.sh@30 -- # for sub in "$@" 00:30:52.154 09:59:15 -- target/dif.sh@31 -- # create_subsystem 0 00:30:52.154 09:59:15 -- target/dif.sh@18 -- # local sub_id=0 00:30:52.154 09:59:15 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 2 00:30:52.154 09:59:15 -- common/autotest_common.sh@549 -- # xtrace_disable 00:30:52.154 09:59:15 -- common/autotest_common.sh@10 -- # set +x 00:30:52.154 bdev_null0 00:30:52.154 09:59:15 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:30:52.154 09:59:15 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:30:52.154 09:59:15 -- 
common/autotest_common.sh@549 -- # xtrace_disable 00:30:52.154 09:59:15 -- common/autotest_common.sh@10 -- # set +x 00:30:52.154 09:59:15 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:30:52.154 09:59:15 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:30:52.154 09:59:15 -- common/autotest_common.sh@549 -- # xtrace_disable 00:30:52.154 09:59:15 -- common/autotest_common.sh@10 -- # set +x 00:30:52.154 09:59:15 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:30:52.154 09:59:15 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:30:52.154 09:59:15 -- common/autotest_common.sh@549 -- # xtrace_disable 00:30:52.154 09:59:15 -- common/autotest_common.sh@10 -- # set +x 00:30:52.154 [2024-04-18 09:59:15.444397] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:30:52.154 09:59:15 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:30:52.154 09:59:15 -- target/dif.sh@30 -- # for sub in "$@" 00:30:52.154 09:59:15 -- target/dif.sh@31 -- # create_subsystem 1 00:30:52.154 09:59:15 -- target/dif.sh@18 -- # local sub_id=1 00:30:52.154 09:59:15 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 2 00:30:52.154 09:59:15 -- common/autotest_common.sh@549 -- # xtrace_disable 00:30:52.154 09:59:15 -- common/autotest_common.sh@10 -- # set +x 00:30:52.154 bdev_null1 00:30:52.154 09:59:15 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:30:52.154 09:59:15 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:30:52.154 09:59:15 -- common/autotest_common.sh@549 -- # xtrace_disable 00:30:52.154 09:59:15 -- common/autotest_common.sh@10 -- # set +x 00:30:52.154 09:59:15 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:30:52.154 09:59:15 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:30:52.154 09:59:15 -- common/autotest_common.sh@549 -- # xtrace_disable 00:30:52.154 09:59:15 -- common/autotest_common.sh@10 -- # set +x 00:30:52.154 09:59:15 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:30:52.154 09:59:15 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:30:52.154 09:59:15 -- common/autotest_common.sh@549 -- # xtrace_disable 00:30:52.154 09:59:15 -- common/autotest_common.sh@10 -- # set +x 00:30:52.154 09:59:15 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:30:52.154 09:59:15 -- target/dif.sh@30 -- # for sub in "$@" 00:30:52.154 09:59:15 -- target/dif.sh@31 -- # create_subsystem 2 00:30:52.154 09:59:15 -- target/dif.sh@18 -- # local sub_id=2 00:30:52.154 09:59:15 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null2 64 512 --md-size 16 --dif-type 2 00:30:52.154 09:59:15 -- common/autotest_common.sh@549 -- # xtrace_disable 00:30:52.154 09:59:15 -- common/autotest_common.sh@10 -- # set +x 00:30:52.154 bdev_null2 00:30:52.154 09:59:15 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:30:52.154 09:59:15 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 --serial-number 53313233-2 --allow-any-host 00:30:52.154 09:59:15 -- common/autotest_common.sh@549 -- # xtrace_disable 00:30:52.154 09:59:15 -- common/autotest_common.sh@10 -- # set +x 00:30:52.154 09:59:15 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:30:52.154 09:59:15 -- target/dif.sh@23 -- # rpc_cmd 
nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 bdev_null2 00:30:52.154 09:59:15 -- common/autotest_common.sh@549 -- # xtrace_disable 00:30:52.154 09:59:15 -- common/autotest_common.sh@10 -- # set +x 00:30:52.154 09:59:15 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:30:52.154 09:59:15 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:30:52.154 09:59:15 -- common/autotest_common.sh@549 -- # xtrace_disable 00:30:52.154 09:59:15 -- common/autotest_common.sh@10 -- # set +x 00:30:52.154 09:59:15 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:30:52.154 09:59:15 -- target/dif.sh@112 -- # fio /dev/fd/62 00:30:52.154 09:59:15 -- target/dif.sh@112 -- # create_json_sub_conf 0 1 2 00:30:52.154 09:59:15 -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 2 00:30:52.154 09:59:15 -- nvmf/common.sh@521 -- # config=() 00:30:52.154 09:59:15 -- nvmf/common.sh@521 -- # local subsystem config 00:30:52.154 09:59:15 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:30:52.154 09:59:15 -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:30:52.154 09:59:15 -- target/dif.sh@82 -- # gen_fio_conf 00:30:52.154 09:59:15 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:30:52.154 { 00:30:52.154 "params": { 00:30:52.154 "name": "Nvme$subsystem", 00:30:52.154 "trtype": "$TEST_TRANSPORT", 00:30:52.154 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:52.154 "adrfam": "ipv4", 00:30:52.154 "trsvcid": "$NVMF_PORT", 00:30:52.154 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:52.154 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:52.154 "hdgst": ${hdgst:-false}, 00:30:52.154 "ddgst": ${ddgst:-false} 00:30:52.154 }, 00:30:52.154 "method": "bdev_nvme_attach_controller" 00:30:52.154 } 00:30:52.154 EOF 00:30:52.154 )") 00:30:52.154 09:59:15 -- target/dif.sh@54 -- # local file 00:30:52.154 09:59:15 -- target/dif.sh@56 -- # cat 00:30:52.154 09:59:15 -- common/autotest_common.sh@1342 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:30:52.154 09:59:15 -- common/autotest_common.sh@1323 -- # local fio_dir=/usr/src/fio 00:30:52.154 09:59:15 -- common/autotest_common.sh@1325 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:30:52.154 09:59:15 -- common/autotest_common.sh@1325 -- # local sanitizers 00:30:52.154 09:59:15 -- nvmf/common.sh@543 -- # cat 00:30:52.154 09:59:15 -- common/autotest_common.sh@1326 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:30:52.154 09:59:15 -- common/autotest_common.sh@1327 -- # shift 00:30:52.154 09:59:15 -- common/autotest_common.sh@1329 -- # local asan_lib= 00:30:52.154 09:59:15 -- common/autotest_common.sh@1330 -- # for sanitizer in "${sanitizers[@]}" 00:30:52.154 09:59:15 -- target/dif.sh@72 -- # (( file = 1 )) 00:30:52.154 09:59:15 -- target/dif.sh@72 -- # (( file <= files )) 00:30:52.154 09:59:15 -- common/autotest_common.sh@1331 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:30:52.154 09:59:15 -- target/dif.sh@73 -- # cat 00:30:52.154 09:59:15 -- common/autotest_common.sh@1331 -- # grep libasan 00:30:52.154 09:59:15 -- common/autotest_common.sh@1331 -- # awk '{print $3}' 00:30:52.154 09:59:15 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:30:52.155 09:59:15 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:30:52.155 { 00:30:52.155 "params": { 00:30:52.155 "name": "Nvme$subsystem", 00:30:52.155 "trtype": "$TEST_TRANSPORT", 00:30:52.155 
"traddr": "$NVMF_FIRST_TARGET_IP", 00:30:52.155 "adrfam": "ipv4", 00:30:52.155 "trsvcid": "$NVMF_PORT", 00:30:52.155 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:52.155 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:52.155 "hdgst": ${hdgst:-false}, 00:30:52.155 "ddgst": ${ddgst:-false} 00:30:52.155 }, 00:30:52.155 "method": "bdev_nvme_attach_controller" 00:30:52.155 } 00:30:52.155 EOF 00:30:52.155 )") 00:30:52.155 09:59:15 -- target/dif.sh@72 -- # (( file++ )) 00:30:52.155 09:59:15 -- target/dif.sh@72 -- # (( file <= files )) 00:30:52.155 09:59:15 -- target/dif.sh@73 -- # cat 00:30:52.155 09:59:15 -- nvmf/common.sh@543 -- # cat 00:30:52.155 09:59:15 -- target/dif.sh@72 -- # (( file++ )) 00:30:52.155 09:59:15 -- target/dif.sh@72 -- # (( file <= files )) 00:30:52.155 09:59:15 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:30:52.155 09:59:15 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:30:52.155 { 00:30:52.155 "params": { 00:30:52.155 "name": "Nvme$subsystem", 00:30:52.155 "trtype": "$TEST_TRANSPORT", 00:30:52.155 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:52.155 "adrfam": "ipv4", 00:30:52.155 "trsvcid": "$NVMF_PORT", 00:30:52.155 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:52.155 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:52.155 "hdgst": ${hdgst:-false}, 00:30:52.155 "ddgst": ${ddgst:-false} 00:30:52.155 }, 00:30:52.155 "method": "bdev_nvme_attach_controller" 00:30:52.155 } 00:30:52.155 EOF 00:30:52.155 )") 00:30:52.155 09:59:15 -- nvmf/common.sh@543 -- # cat 00:30:52.155 09:59:15 -- nvmf/common.sh@545 -- # jq . 00:30:52.155 09:59:15 -- nvmf/common.sh@546 -- # IFS=, 00:30:52.155 09:59:15 -- nvmf/common.sh@547 -- # printf '%s\n' '{ 00:30:52.155 "params": { 00:30:52.155 "name": "Nvme0", 00:30:52.155 "trtype": "tcp", 00:30:52.155 "traddr": "10.0.0.2", 00:30:52.155 "adrfam": "ipv4", 00:30:52.155 "trsvcid": "4420", 00:30:52.155 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:30:52.155 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:30:52.155 "hdgst": false, 00:30:52.155 "ddgst": false 00:30:52.155 }, 00:30:52.155 "method": "bdev_nvme_attach_controller" 00:30:52.155 },{ 00:30:52.155 "params": { 00:30:52.155 "name": "Nvme1", 00:30:52.155 "trtype": "tcp", 00:30:52.155 "traddr": "10.0.0.2", 00:30:52.155 "adrfam": "ipv4", 00:30:52.155 "trsvcid": "4420", 00:30:52.155 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:30:52.155 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:30:52.155 "hdgst": false, 00:30:52.155 "ddgst": false 00:30:52.155 }, 00:30:52.155 "method": "bdev_nvme_attach_controller" 00:30:52.155 },{ 00:30:52.155 "params": { 00:30:52.155 "name": "Nvme2", 00:30:52.155 "trtype": "tcp", 00:30:52.155 "traddr": "10.0.0.2", 00:30:52.155 "adrfam": "ipv4", 00:30:52.155 "trsvcid": "4420", 00:30:52.155 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:30:52.155 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:30:52.155 "hdgst": false, 00:30:52.155 "ddgst": false 00:30:52.155 }, 00:30:52.155 "method": "bdev_nvme_attach_controller" 00:30:52.155 }' 00:30:52.414 09:59:15 -- common/autotest_common.sh@1331 -- # asan_lib=/usr/lib64/libasan.so.8 00:30:52.414 09:59:15 -- common/autotest_common.sh@1332 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:30:52.414 09:59:15 -- common/autotest_common.sh@1333 -- # break 00:30:52.414 09:59:15 -- common/autotest_common.sh@1338 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:30:52.414 09:59:15 -- common/autotest_common.sh@1338 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 
00:30:52.414 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:30:52.414 ... 00:30:52.414 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:30:52.414 ... 00:30:52.414 filename2: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:30:52.414 ... 00:30:52.414 fio-3.35 00:30:52.414 Starting 24 threads 00:31:04.679 00:31:04.679 filename0: (groupid=0, jobs=1): err= 0: pid=83124: Thu Apr 18 09:59:26 2024 00:31:04.679 read: IOPS=149, BW=598KiB/s (612kB/s)(5984KiB/10010msec) 00:31:04.679 slat (usec): min=4, max=8026, avg=21.16, stdev=207.19 00:31:04.679 clat (msec): min=12, max=204, avg=106.91, stdev=32.37 00:31:04.679 lat (msec): min=12, max=204, avg=106.94, stdev=32.36 00:31:04.679 clat percentiles (msec): 00:31:04.679 | 1.00th=[ 28], 5.00th=[ 56], 10.00th=[ 61], 20.00th=[ 75], 00:31:04.679 | 30.00th=[ 86], 40.00th=[ 96], 50.00th=[ 109], 60.00th=[ 131], 00:31:04.679 | 70.00th=[ 132], 80.00th=[ 133], 90.00th=[ 144], 95.00th=[ 144], 00:31:04.679 | 99.00th=[ 169], 99.50th=[ 180], 99.90th=[ 205], 99.95th=[ 205], 00:31:04.679 | 99.99th=[ 205] 00:31:04.679 bw ( KiB/s): min= 396, max= 872, per=4.23%, avg=582.53, stdev=147.87, samples=19 00:31:04.679 iops : min= 99, max= 218, avg=145.63, stdev=36.97, samples=19 00:31:04.680 lat (msec) : 20=0.40%, 50=3.61%, 100=42.11%, 250=53.88% 00:31:04.680 cpu : usr=31.51%, sys=1.98%, ctx=867, majf=0, minf=1074 00:31:04.680 IO depths : 1=0.1%, 2=3.0%, 4=12.0%, 8=70.8%, 16=14.2%, 32=0.0%, >=64=0.0% 00:31:04.680 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:04.680 complete : 0=0.0%, 4=90.3%, 8=7.1%, 16=2.6%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:04.680 issued rwts: total=1496,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:04.680 latency : target=0, window=0, percentile=100.00%, depth=16 00:31:04.680 filename0: (groupid=0, jobs=1): err= 0: pid=83125: Thu Apr 18 09:59:26 2024 00:31:04.680 read: IOPS=155, BW=623KiB/s (638kB/s)(6292KiB/10098msec) 00:31:04.680 slat (usec): min=5, max=3061, avg=18.35, stdev=77.11 00:31:04.680 clat (msec): min=4, max=204, avg=102.31, stdev=44.63 00:31:04.680 lat (msec): min=4, max=204, avg=102.33, stdev=44.63 00:31:04.680 clat percentiles (msec): 00:31:04.680 | 1.00th=[ 6], 5.00th=[ 10], 10.00th=[ 47], 20.00th=[ 70], 00:31:04.680 | 30.00th=[ 84], 40.00th=[ 90], 50.00th=[ 96], 60.00th=[ 110], 00:31:04.680 | 70.00th=[ 132], 80.00th=[ 144], 90.00th=[ 161], 95.00th=[ 171], 00:31:04.680 | 99.00th=[ 205], 99.50th=[ 205], 99.90th=[ 205], 99.95th=[ 205], 00:31:04.680 | 99.99th=[ 205] 00:31:04.680 bw ( KiB/s): min= 384, max= 1664, per=4.53%, avg=622.65, stdev=285.56, samples=20 00:31:04.680 iops : min= 96, max= 416, avg=155.65, stdev=71.40, samples=20 00:31:04.680 lat (msec) : 10=5.98%, 20=1.14%, 50=3.88%, 100=42.59%, 250=46.41% 00:31:04.680 cpu : usr=36.13%, sys=2.06%, ctx=1132, majf=0, minf=1076 00:31:04.680 IO depths : 1=0.4%, 2=3.1%, 4=10.8%, 8=71.5%, 16=14.2%, 32=0.0%, >=64=0.0% 00:31:04.680 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:04.680 complete : 0=0.0%, 4=90.2%, 8=7.5%, 16=2.4%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:04.680 issued rwts: total=1573,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:04.680 latency : target=0, window=0, percentile=100.00%, depth=16 00:31:04.680 filename0: (groupid=0, jobs=1): err= 0: pid=83126: Thu Apr 18 09:59:26 2024 00:31:04.680 read: IOPS=137, BW=551KiB/s 
(564kB/s)(5512KiB/10004msec) 00:31:04.680 slat (usec): min=5, max=4038, avg=19.69, stdev=109.46 00:31:04.680 clat (msec): min=3, max=205, avg=115.98, stdev=25.58 00:31:04.680 lat (msec): min=3, max=205, avg=116.00, stdev=25.58 00:31:04.680 clat percentiles (msec): 00:31:04.680 | 1.00th=[ 13], 5.00th=[ 84], 10.00th=[ 88], 20.00th=[ 94], 00:31:04.680 | 30.00th=[ 96], 40.00th=[ 110], 50.00th=[ 124], 60.00th=[ 130], 00:31:04.680 | 70.00th=[ 133], 80.00th=[ 138], 90.00th=[ 144], 95.00th=[ 144], 00:31:04.680 | 99.00th=[ 178], 99.50th=[ 178], 99.90th=[ 205], 99.95th=[ 205], 00:31:04.680 | 99.99th=[ 205] 00:31:04.680 bw ( KiB/s): min= 400, max= 752, per=3.91%, avg=538.95, stdev=103.70, samples=19 00:31:04.680 iops : min= 100, max= 188, avg=134.74, stdev=25.92, samples=19 00:31:04.680 lat (msec) : 4=0.15%, 10=0.15%, 20=0.87%, 50=0.15%, 100=32.80% 00:31:04.680 lat (msec) : 250=65.89% 00:31:04.680 cpu : usr=42.40%, sys=2.53%, ctx=1389, majf=0, minf=1075 00:31:04.680 IO depths : 1=0.1%, 2=6.2%, 4=24.9%, 8=56.3%, 16=12.5%, 32=0.0%, >=64=0.0% 00:31:04.680 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:04.680 complete : 0=0.0%, 4=94.4%, 8=0.1%, 16=5.5%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:04.680 issued rwts: total=1378,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:04.680 latency : target=0, window=0, percentile=100.00%, depth=16 00:31:04.680 filename0: (groupid=0, jobs=1): err= 0: pid=83127: Thu Apr 18 09:59:26 2024 00:31:04.680 read: IOPS=147, BW=592KiB/s (606kB/s)(5948KiB/10052msec) 00:31:04.680 slat (usec): min=4, max=8038, avg=30.23, stdev=296.60 00:31:04.680 clat (msec): min=36, max=192, avg=107.80, stdev=32.14 00:31:04.680 lat (msec): min=36, max=192, avg=107.83, stdev=32.14 00:31:04.680 clat percentiles (msec): 00:31:04.680 | 1.00th=[ 43], 5.00th=[ 56], 10.00th=[ 63], 20.00th=[ 78], 00:31:04.680 | 30.00th=[ 89], 40.00th=[ 96], 50.00th=[ 107], 60.00th=[ 128], 00:31:04.680 | 70.00th=[ 132], 80.00th=[ 136], 90.00th=[ 144], 95.00th=[ 155], 00:31:04.680 | 99.00th=[ 180], 99.50th=[ 182], 99.90th=[ 192], 99.95th=[ 192], 00:31:04.680 | 99.99th=[ 192] 00:31:04.680 bw ( KiB/s): min= 400, max= 872, per=4.29%, avg=590.50, stdev=152.48, samples=20 00:31:04.680 iops : min= 100, max= 218, avg=147.60, stdev=38.12, samples=20 00:31:04.680 lat (msec) : 50=3.36%, 100=40.01%, 250=56.62% 00:31:04.680 cpu : usr=42.93%, sys=2.57%, ctx=1456, majf=0, minf=1075 00:31:04.680 IO depths : 1=0.1%, 2=3.0%, 4=11.8%, 8=71.1%, 16=14.1%, 32=0.0%, >=64=0.0% 00:31:04.680 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:04.680 complete : 0=0.0%, 4=90.2%, 8=7.2%, 16=2.6%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:04.680 issued rwts: total=1487,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:04.680 latency : target=0, window=0, percentile=100.00%, depth=16 00:31:04.680 filename0: (groupid=0, jobs=1): err= 0: pid=83128: Thu Apr 18 09:59:26 2024 00:31:04.680 read: IOPS=140, BW=562KiB/s (575kB/s)(5624KiB/10009msec) 00:31:04.680 slat (usec): min=5, max=8038, avg=27.07, stdev=302.42 00:31:04.680 clat (msec): min=4, max=179, avg=113.68, stdev=29.62 00:31:04.680 lat (msec): min=4, max=179, avg=113.71, stdev=29.62 00:31:04.680 clat percentiles (msec): 00:31:04.680 | 1.00th=[ 10], 5.00th=[ 72], 10.00th=[ 85], 20.00th=[ 95], 00:31:04.680 | 30.00th=[ 96], 40.00th=[ 108], 50.00th=[ 122], 60.00th=[ 132], 00:31:04.680 | 70.00th=[ 132], 80.00th=[ 132], 90.00th=[ 144], 95.00th=[ 144], 00:31:04.680 | 99.00th=[ 169], 99.50th=[ 180], 99.90th=[ 180], 99.95th=[ 180], 00:31:04.680 | 99.99th=[ 180] 00:31:04.680 bw ( 
KiB/s): min= 400, max= 752, per=3.91%, avg=538.37, stdev=95.28, samples=19 00:31:04.680 iops : min= 100, max= 188, avg=134.53, stdev=23.76, samples=19 00:31:04.680 lat (msec) : 10=1.14%, 20=1.99%, 50=1.28%, 100=32.15%, 250=63.44% 00:31:04.680 cpu : usr=32.12%, sys=1.78%, ctx=853, majf=0, minf=1074 00:31:04.680 IO depths : 1=0.1%, 2=6.3%, 4=25.0%, 8=56.2%, 16=12.4%, 32=0.0%, >=64=0.0% 00:31:04.680 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:04.680 complete : 0=0.0%, 4=94.4%, 8=0.0%, 16=5.6%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:04.680 issued rwts: total=1406,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:04.680 latency : target=0, window=0, percentile=100.00%, depth=16 00:31:04.680 filename0: (groupid=0, jobs=1): err= 0: pid=83129: Thu Apr 18 09:59:26 2024 00:31:04.680 read: IOPS=143, BW=574KiB/s (588kB/s)(5792KiB/10087msec) 00:31:04.680 slat (usec): min=5, max=8048, avg=37.95, stdev=421.14 00:31:04.680 clat (usec): min=452, max=203900, avg=110817.62, stdev=43623.98 00:31:04.680 lat (usec): min=464, max=203926, avg=110855.56, stdev=43640.82 00:31:04.680 clat percentiles (msec): 00:31:04.680 | 1.00th=[ 3], 5.00th=[ 7], 10.00th=[ 41], 20.00th=[ 85], 00:31:04.680 | 30.00th=[ 96], 40.00th=[ 108], 50.00th=[ 122], 60.00th=[ 132], 00:31:04.680 | 70.00th=[ 133], 80.00th=[ 142], 90.00th=[ 148], 95.00th=[ 171], 00:31:04.680 | 99.00th=[ 197], 99.50th=[ 205], 99.90th=[ 205], 99.95th=[ 205], 00:31:04.680 | 99.99th=[ 205] 00:31:04.680 bw ( KiB/s): min= 384, max= 1776, per=4.18%, avg=575.00, stdev=296.43, samples=20 00:31:04.680 iops : min= 96, max= 444, avg=143.75, stdev=74.11, samples=20 00:31:04.680 lat (usec) : 500=0.14% 00:31:04.680 lat (msec) : 4=2.07%, 10=5.39%, 20=1.10%, 50=2.21%, 100=26.73% 00:31:04.680 lat (msec) : 250=62.36% 00:31:04.680 cpu : usr=32.12%, sys=1.70%, ctx=875, majf=0, minf=1074 00:31:04.680 IO depths : 1=0.5%, 2=6.2%, 4=23.3%, 8=57.5%, 16=12.4%, 32=0.0%, >=64=0.0% 00:31:04.680 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:04.680 complete : 0=0.0%, 4=94.0%, 8=0.8%, 16=5.2%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:04.680 issued rwts: total=1448,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:04.680 latency : target=0, window=0, percentile=100.00%, depth=16 00:31:04.680 filename0: (groupid=0, jobs=1): err= 0: pid=83130: Thu Apr 18 09:59:26 2024 00:31:04.680 read: IOPS=130, BW=523KiB/s (535kB/s)(5248KiB/10042msec) 00:31:04.680 slat (usec): min=5, max=19043, avg=36.40, stdev=548.17 00:31:04.680 clat (msec): min=48, max=199, avg=122.11, stdev=24.19 00:31:04.680 lat (msec): min=48, max=199, avg=122.15, stdev=24.19 00:31:04.680 clat percentiles (msec): 00:31:04.680 | 1.00th=[ 71], 5.00th=[ 84], 10.00th=[ 87], 20.00th=[ 96], 00:31:04.680 | 30.00th=[ 107], 40.00th=[ 120], 50.00th=[ 130], 60.00th=[ 132], 00:31:04.680 | 70.00th=[ 134], 80.00th=[ 140], 90.00th=[ 144], 95.00th=[ 167], 00:31:04.680 | 99.00th=[ 171], 99.50th=[ 178], 99.90th=[ 201], 99.95th=[ 201], 00:31:04.680 | 99.99th=[ 201] 00:31:04.680 bw ( KiB/s): min= 384, max= 656, per=3.77%, avg=518.25, stdev=90.18, samples=20 00:31:04.680 iops : min= 96, max= 164, avg=129.55, stdev=22.56, samples=20 00:31:04.680 lat (msec) : 50=0.15%, 100=25.76%, 250=74.09% 00:31:04.680 cpu : usr=42.42%, sys=2.43%, ctx=1275, majf=0, minf=1075 00:31:04.680 IO depths : 1=0.1%, 2=6.3%, 4=25.0%, 8=56.2%, 16=12.4%, 32=0.0%, >=64=0.0% 00:31:04.680 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:04.680 complete : 0=0.0%, 4=94.4%, 8=0.0%, 16=5.6%, 32=0.0%, 64=0.0%, >=64=0.0% 
00:31:04.680 issued rwts: total=1312,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:04.680 latency : target=0, window=0, percentile=100.00%, depth=16 00:31:04.680 filename0: (groupid=0, jobs=1): err= 0: pid=83131: Thu Apr 18 09:59:26 2024 00:31:04.680 read: IOPS=132, BW=530KiB/s (542kB/s)(5312KiB/10027msec) 00:31:04.680 slat (usec): min=5, max=589, avg=16.36, stdev=16.51 00:31:04.680 clat (msec): min=33, max=200, avg=120.59, stdev=22.97 00:31:04.680 lat (msec): min=33, max=200, avg=120.60, stdev=22.97 00:31:04.680 clat percentiles (msec): 00:31:04.680 | 1.00th=[ 53], 5.00th=[ 85], 10.00th=[ 87], 20.00th=[ 96], 00:31:04.680 | 30.00th=[ 106], 40.00th=[ 120], 50.00th=[ 130], 60.00th=[ 132], 00:31:04.680 | 70.00th=[ 134], 80.00th=[ 142], 90.00th=[ 144], 95.00th=[ 146], 00:31:04.680 | 99.00th=[ 171], 99.50th=[ 171], 99.90th=[ 201], 99.95th=[ 201], 00:31:04.680 | 99.99th=[ 201] 00:31:04.680 bw ( KiB/s): min= 384, max= 640, per=3.77%, avg=518.74, stdev=84.38, samples=19 00:31:04.680 iops : min= 96, max= 160, avg=129.68, stdev=21.10, samples=19 00:31:04.680 lat (msec) : 50=0.15%, 100=26.20%, 250=73.64% 00:31:04.680 cpu : usr=34.68%, sys=2.07%, ctx=1021, majf=0, minf=1074 00:31:04.680 IO depths : 1=0.1%, 2=6.3%, 4=25.0%, 8=56.2%, 16=12.4%, 32=0.0%, >=64=0.0% 00:31:04.680 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:04.680 complete : 0=0.0%, 4=94.4%, 8=0.0%, 16=5.6%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:04.680 issued rwts: total=1328,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:04.680 latency : target=0, window=0, percentile=100.00%, depth=16 00:31:04.680 filename1: (groupid=0, jobs=1): err= 0: pid=83132: Thu Apr 18 09:59:26 2024 00:31:04.681 read: IOPS=133, BW=534KiB/s (547kB/s)(5368KiB/10047msec) 00:31:04.681 slat (usec): min=5, max=8054, avg=22.52, stdev=219.47 00:31:04.681 clat (msec): min=55, max=204, avg=119.62, stdev=25.22 00:31:04.681 lat (msec): min=55, max=204, avg=119.64, stdev=25.22 00:31:04.681 clat percentiles (msec): 00:31:04.681 | 1.00th=[ 56], 5.00th=[ 85], 10.00th=[ 88], 20.00th=[ 96], 00:31:04.681 | 30.00th=[ 96], 40.00th=[ 109], 50.00th=[ 123], 60.00th=[ 132], 00:31:04.681 | 70.00th=[ 133], 80.00th=[ 142], 90.00th=[ 144], 95.00th=[ 169], 00:31:04.681 | 99.00th=[ 171], 99.50th=[ 190], 99.90th=[ 205], 99.95th=[ 205], 00:31:04.681 | 99.99th=[ 205] 00:31:04.681 bw ( KiB/s): min= 384, max= 656, per=3.86%, avg=530.00, stdev=92.70, samples=20 00:31:04.681 iops : min= 96, max= 164, avg=132.50, stdev=23.18, samples=20 00:31:04.681 lat (msec) : 100=33.08%, 250=66.92% 00:31:04.681 cpu : usr=31.27%, sys=1.88%, ctx=847, majf=0, minf=1075 00:31:04.681 IO depths : 1=0.1%, 2=6.3%, 4=24.7%, 8=56.5%, 16=12.4%, 32=0.0%, >=64=0.0% 00:31:04.681 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:04.681 complete : 0=0.0%, 4=94.4%, 8=0.1%, 16=5.5%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:04.681 issued rwts: total=1342,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:04.681 latency : target=0, window=0, percentile=100.00%, depth=16 00:31:04.681 filename1: (groupid=0, jobs=1): err= 0: pid=83133: Thu Apr 18 09:59:26 2024 00:31:04.681 read: IOPS=131, BW=528KiB/s (540kB/s)(5320KiB/10079msec) 00:31:04.681 slat (usec): min=5, max=4029, avg=19.74, stdev=110.30 00:31:04.681 clat (msec): min=45, max=203, avg=120.74, stdev=24.59 00:31:04.681 lat (msec): min=45, max=203, avg=120.76, stdev=24.59 00:31:04.681 clat percentiles (msec): 00:31:04.681 | 1.00th=[ 46], 5.00th=[ 84], 10.00th=[ 91], 20.00th=[ 96], 00:31:04.681 | 30.00th=[ 106], 40.00th=[ 121], 50.00th=[ 126], 60.00th=[ 
132], 00:31:04.681 | 70.00th=[ 136], 80.00th=[ 140], 90.00th=[ 144], 95.00th=[ 153], 00:31:04.681 | 99.00th=[ 197], 99.50th=[ 197], 99.90th=[ 203], 99.95th=[ 203], 00:31:04.681 | 99.99th=[ 203] 00:31:04.681 bw ( KiB/s): min= 384, max= 752, per=3.82%, avg=525.60, stdev=103.47, samples=20 00:31:04.681 iops : min= 96, max= 188, avg=131.40, stdev=25.87, samples=20 00:31:04.681 lat (msec) : 50=2.11%, 100=22.33%, 250=75.56% 00:31:04.681 cpu : usr=38.13%, sys=1.90%, ctx=1248, majf=0, minf=1076 00:31:04.681 IO depths : 1=0.1%, 2=6.2%, 4=24.9%, 8=56.2%, 16=12.6%, 32=0.0%, >=64=0.0% 00:31:04.681 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:04.681 complete : 0=0.0%, 4=94.5%, 8=0.0%, 16=5.5%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:04.681 issued rwts: total=1330,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:04.681 latency : target=0, window=0, percentile=100.00%, depth=16 00:31:04.681 filename1: (groupid=0, jobs=1): err= 0: pid=83134: Thu Apr 18 09:59:26 2024 00:31:04.681 read: IOPS=140, BW=562KiB/s (576kB/s)(5628KiB/10010msec) 00:31:04.681 slat (usec): min=5, max=8028, avg=28.80, stdev=261.69 00:31:04.681 clat (msec): min=11, max=202, avg=113.66, stdev=29.03 00:31:04.681 lat (msec): min=11, max=202, avg=113.69, stdev=29.03 00:31:04.681 clat percentiles (msec): 00:31:04.681 | 1.00th=[ 27], 5.00th=[ 62], 10.00th=[ 81], 20.00th=[ 89], 00:31:04.681 | 30.00th=[ 96], 40.00th=[ 97], 50.00th=[ 123], 60.00th=[ 130], 00:31:04.681 | 70.00th=[ 133], 80.00th=[ 138], 90.00th=[ 142], 95.00th=[ 148], 00:31:04.681 | 99.00th=[ 169], 99.50th=[ 180], 99.90th=[ 203], 99.95th=[ 203], 00:31:04.681 | 99.99th=[ 203] 00:31:04.681 bw ( KiB/s): min= 384, max= 768, per=3.97%, avg=545.05, stdev=107.91, samples=19 00:31:04.681 iops : min= 96, max= 192, avg=136.26, stdev=26.98, samples=19 00:31:04.681 lat (msec) : 20=0.50%, 50=1.49%, 100=38.17%, 250=59.84% 00:31:04.681 cpu : usr=38.09%, sys=2.28%, ctx=1089, majf=0, minf=1075 00:31:04.681 IO depths : 1=0.1%, 2=4.8%, 4=19.0%, 8=63.0%, 16=13.2%, 32=0.0%, >=64=0.0% 00:31:04.681 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:04.681 complete : 0=0.0%, 4=92.5%, 8=3.3%, 16=4.2%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:04.681 issued rwts: total=1407,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:04.681 latency : target=0, window=0, percentile=100.00%, depth=16 00:31:04.681 filename1: (groupid=0, jobs=1): err= 0: pid=83135: Thu Apr 18 09:59:26 2024 00:31:04.681 read: IOPS=139, BW=557KiB/s (570kB/s)(5600KiB/10060msec) 00:31:04.681 slat (nsec): min=5331, max=96792, avg=15296.40, stdev=6299.80 00:31:04.681 clat (msec): min=24, max=204, avg=114.69, stdev=35.99 00:31:04.681 lat (msec): min=24, max=204, avg=114.70, stdev=35.99 00:31:04.681 clat percentiles (msec): 00:31:04.681 | 1.00th=[ 25], 5.00th=[ 62], 10.00th=[ 70], 20.00th=[ 84], 00:31:04.681 | 30.00th=[ 92], 40.00th=[ 97], 50.00th=[ 113], 60.00th=[ 132], 00:31:04.681 | 70.00th=[ 136], 80.00th=[ 144], 90.00th=[ 169], 95.00th=[ 180], 00:31:04.681 | 99.00th=[ 205], 99.50th=[ 205], 99.90th=[ 205], 99.95th=[ 205], 00:31:04.681 | 99.99th=[ 205] 00:31:04.681 bw ( KiB/s): min= 384, max= 880, per=4.02%, avg=553.60, stdev=158.89, samples=20 00:31:04.681 iops : min= 96, max= 220, avg=138.40, stdev=39.72, samples=20 00:31:04.681 lat (msec) : 50=2.57%, 100=42.71%, 250=54.71% 00:31:04.681 cpu : usr=36.89%, sys=2.09%, ctx=1082, majf=0, minf=1075 00:31:04.681 IO depths : 1=0.1%, 2=4.4%, 4=17.1%, 8=65.0%, 16=13.4%, 32=0.0%, >=64=0.0% 00:31:04.681 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, 
>=64=0.0% 00:31:04.681 complete : 0=0.0%, 4=91.9%, 8=4.3%, 16=3.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:04.681 issued rwts: total=1400,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:04.681 latency : target=0, window=0, percentile=100.00%, depth=16 00:31:04.681 filename1: (groupid=0, jobs=1): err= 0: pid=83136: Thu Apr 18 09:59:26 2024 00:31:04.681 read: IOPS=144, BW=576KiB/s (590kB/s)(5804KiB/10075msec) 00:31:04.681 slat (usec): min=4, max=4032, avg=24.71, stdev=169.11 00:31:04.681 clat (msec): min=40, max=199, avg=110.74, stdev=32.82 00:31:04.681 lat (msec): min=40, max=199, avg=110.76, stdev=32.82 00:31:04.681 clat percentiles (msec): 00:31:04.681 | 1.00th=[ 45], 5.00th=[ 61], 10.00th=[ 64], 20.00th=[ 81], 00:31:04.681 | 30.00th=[ 91], 40.00th=[ 97], 50.00th=[ 115], 60.00th=[ 128], 00:31:04.681 | 70.00th=[ 133], 80.00th=[ 138], 90.00th=[ 144], 95.00th=[ 165], 00:31:04.681 | 99.00th=[ 199], 99.50th=[ 199], 99.90th=[ 199], 99.95th=[ 199], 00:31:04.681 | 99.99th=[ 199] 00:31:04.681 bw ( KiB/s): min= 384, max= 872, per=4.18%, avg=574.00, stdev=146.75, samples=20 00:31:04.681 iops : min= 96, max= 218, avg=143.50, stdev=36.69, samples=20 00:31:04.681 lat (msec) : 50=2.41%, 100=41.14%, 250=56.44% 00:31:04.681 cpu : usr=42.40%, sys=2.34%, ctx=1369, majf=0, minf=1075 00:31:04.681 IO depths : 1=0.1%, 2=4.1%, 4=16.2%, 8=66.1%, 16=13.6%, 32=0.0%, >=64=0.0% 00:31:04.681 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:04.681 complete : 0=0.0%, 4=91.6%, 8=4.9%, 16=3.6%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:04.681 issued rwts: total=1451,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:04.681 latency : target=0, window=0, percentile=100.00%, depth=16 00:31:04.681 filename1: (groupid=0, jobs=1): err= 0: pid=83137: Thu Apr 18 09:59:26 2024 00:31:04.681 read: IOPS=160, BW=644KiB/s (659kB/s)(6468KiB/10045msec) 00:31:04.681 slat (usec): min=5, max=8034, avg=28.79, stdev=298.97 00:31:04.681 clat (msec): min=35, max=178, avg=99.13, stdev=27.40 00:31:04.681 lat (msec): min=35, max=178, avg=99.16, stdev=27.41 00:31:04.681 clat percentiles (msec): 00:31:04.681 | 1.00th=[ 46], 5.00th=[ 57], 10.00th=[ 63], 20.00th=[ 74], 00:31:04.681 | 30.00th=[ 85], 40.00th=[ 91], 50.00th=[ 96], 60.00th=[ 102], 00:31:04.681 | 70.00th=[ 118], 80.00th=[ 131], 90.00th=[ 136], 95.00th=[ 142], 00:31:04.681 | 99.00th=[ 148], 99.50th=[ 169], 99.90th=[ 180], 99.95th=[ 180], 00:31:04.681 | 99.99th=[ 180] 00:31:04.681 bw ( KiB/s): min= 512, max= 824, per=4.67%, avg=642.65, stdev=93.99, samples=20 00:31:04.681 iops : min= 128, max= 206, avg=160.65, stdev=23.49, samples=20 00:31:04.681 lat (msec) : 50=3.34%, 100=55.97%, 250=40.69% 00:31:04.681 cpu : usr=34.90%, sys=2.10%, ctx=1143, majf=0, minf=1073 00:31:04.681 IO depths : 1=0.1%, 2=0.2%, 4=0.9%, 8=83.1%, 16=15.7%, 32=0.0%, >=64=0.0% 00:31:04.681 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:04.681 complete : 0=0.0%, 4=87.0%, 8=12.8%, 16=0.2%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:04.681 issued rwts: total=1617,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:04.681 latency : target=0, window=0, percentile=100.00%, depth=16 00:31:04.681 filename1: (groupid=0, jobs=1): err= 0: pid=83138: Thu Apr 18 09:59:26 2024 00:31:04.681 read: IOPS=159, BW=637KiB/s (652kB/s)(6392KiB/10034msec) 00:31:04.681 slat (usec): min=4, max=8030, avg=25.81, stdev=283.42 00:31:04.681 clat (msec): min=25, max=176, avg=100.28, stdev=27.89 00:31:04.681 lat (msec): min=25, max=176, avg=100.31, stdev=27.90 00:31:04.681 clat percentiles (msec): 00:31:04.681 | 1.00th=[ 48], 5.00th=[ 58], 
10.00th=[ 62], 20.00th=[ 74], 00:31:04.681 | 30.00th=[ 85], 40.00th=[ 93], 50.00th=[ 96], 60.00th=[ 107], 00:31:04.681 | 70.00th=[ 121], 80.00th=[ 132], 90.00th=[ 138], 95.00th=[ 144], 00:31:04.681 | 99.00th=[ 161], 99.50th=[ 169], 99.90th=[ 178], 99.95th=[ 178], 00:31:04.681 | 99.99th=[ 178] 00:31:04.681 bw ( KiB/s): min= 512, max= 872, per=4.61%, avg=634.95, stdev=106.34, samples=20 00:31:04.681 iops : min= 128, max= 218, avg=158.70, stdev=26.56, samples=20 00:31:04.681 lat (msec) : 50=3.00%, 100=54.51%, 250=42.49% 00:31:04.681 cpu : usr=35.00%, sys=1.95%, ctx=1044, majf=0, minf=1073 00:31:04.681 IO depths : 1=0.1%, 2=0.4%, 4=1.7%, 8=82.2%, 16=15.6%, 32=0.0%, >=64=0.0% 00:31:04.681 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:04.681 complete : 0=0.0%, 4=87.2%, 8=12.4%, 16=0.4%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:04.681 issued rwts: total=1598,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:04.681 latency : target=0, window=0, percentile=100.00%, depth=16 00:31:04.681 filename1: (groupid=0, jobs=1): err= 0: pid=83139: Thu Apr 18 09:59:26 2024 00:31:04.681 read: IOPS=133, BW=535KiB/s (548kB/s)(5368KiB/10025msec) 00:31:04.681 slat (usec): min=5, max=8042, avg=26.06, stdev=249.80 00:31:04.681 clat (msec): min=50, max=201, avg=119.30, stdev=24.22 00:31:04.681 lat (msec): min=50, max=201, avg=119.33, stdev=24.23 00:31:04.681 clat percentiles (msec): 00:31:04.681 | 1.00th=[ 51], 5.00th=[ 83], 10.00th=[ 92], 20.00th=[ 95], 00:31:04.681 | 30.00th=[ 101], 40.00th=[ 117], 50.00th=[ 125], 60.00th=[ 131], 00:31:04.681 | 70.00th=[ 136], 80.00th=[ 140], 90.00th=[ 144], 95.00th=[ 148], 00:31:04.681 | 99.00th=[ 178], 99.50th=[ 182], 99.90th=[ 203], 99.95th=[ 203], 00:31:04.681 | 99.99th=[ 203] 00:31:04.681 bw ( KiB/s): min= 400, max= 656, per=3.82%, avg=525.32, stdev=88.78, samples=19 00:31:04.681 iops : min= 100, max= 164, avg=131.32, stdev=22.18, samples=19 00:31:04.681 lat (msec) : 100=31.00%, 250=69.00% 00:31:04.681 cpu : usr=40.25%, sys=2.18%, ctx=1584, majf=0, minf=1073 00:31:04.681 IO depths : 1=0.1%, 2=6.3%, 4=25.0%, 8=56.2%, 16=12.4%, 32=0.0%, >=64=0.0% 00:31:04.682 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:04.682 complete : 0=0.0%, 4=94.4%, 8=0.0%, 16=5.6%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:04.682 issued rwts: total=1342,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:04.682 latency : target=0, window=0, percentile=100.00%, depth=16 00:31:04.682 filename2: (groupid=0, jobs=1): err= 0: pid=83140: Thu Apr 18 09:59:26 2024 00:31:04.682 read: IOPS=140, BW=562KiB/s (575kB/s)(5640KiB/10040msec) 00:31:04.682 slat (usec): min=5, max=4028, avg=19.31, stdev=107.01 00:31:04.682 clat (msec): min=48, max=180, avg=113.62, stdev=27.56 00:31:04.682 lat (msec): min=48, max=181, avg=113.64, stdev=27.56 00:31:04.682 clat percentiles (msec): 00:31:04.682 | 1.00th=[ 52], 5.00th=[ 69], 10.00th=[ 80], 20.00th=[ 88], 00:31:04.682 | 30.00th=[ 96], 40.00th=[ 105], 50.00th=[ 121], 60.00th=[ 128], 00:31:04.682 | 70.00th=[ 133], 80.00th=[ 138], 90.00th=[ 142], 95.00th=[ 146], 00:31:04.682 | 99.00th=[ 180], 99.50th=[ 180], 99.90th=[ 182], 99.95th=[ 182], 00:31:04.682 | 99.99th=[ 182] 00:31:04.682 bw ( KiB/s): min= 384, max= 768, per=4.07%, avg=559.85, stdev=125.11, samples=20 00:31:04.682 iops : min= 96, max= 192, avg=139.95, stdev=31.27, samples=20 00:31:04.682 lat (msec) : 50=0.07%, 100=38.09%, 250=61.84% 00:31:04.682 cpu : usr=40.78%, sys=2.36%, ctx=1312, majf=0, minf=1074 00:31:04.682 IO depths : 1=0.1%, 2=4.8%, 4=19.1%, 8=62.8%, 16=13.3%, 32=0.0%, >=64=0.0% 
00:31:04.682 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:04.682 complete : 0=0.0%, 4=92.6%, 8=3.2%, 16=4.3%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:04.682 issued rwts: total=1410,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:04.682 latency : target=0, window=0, percentile=100.00%, depth=16 00:31:04.682 filename2: (groupid=0, jobs=1): err= 0: pid=83141: Thu Apr 18 09:59:26 2024 00:31:04.682 read: IOPS=159, BW=639KiB/s (655kB/s)(6408KiB/10025msec) 00:31:04.682 slat (usec): min=4, max=8028, avg=21.61, stdev=200.26 00:31:04.682 clat (msec): min=26, max=176, avg=99.93, stdev=28.39 00:31:04.682 lat (msec): min=26, max=176, avg=99.95, stdev=28.39 00:31:04.682 clat percentiles (msec): 00:31:04.682 | 1.00th=[ 43], 5.00th=[ 59], 10.00th=[ 61], 20.00th=[ 74], 00:31:04.682 | 30.00th=[ 85], 40.00th=[ 92], 50.00th=[ 96], 60.00th=[ 106], 00:31:04.682 | 70.00th=[ 121], 80.00th=[ 132], 90.00th=[ 136], 95.00th=[ 144], 00:31:04.682 | 99.00th=[ 157], 99.50th=[ 169], 99.90th=[ 178], 99.95th=[ 178], 00:31:04.682 | 99.99th=[ 178] 00:31:04.682 bw ( KiB/s): min= 512, max= 824, per=4.60%, avg=632.89, stdev=101.15, samples=19 00:31:04.682 iops : min= 128, max= 206, avg=158.21, stdev=25.30, samples=19 00:31:04.682 lat (msec) : 50=3.06%, 100=55.49%, 250=41.45% 00:31:04.682 cpu : usr=35.73%, sys=2.00%, ctx=1059, majf=0, minf=1074 00:31:04.682 IO depths : 1=0.1%, 2=0.2%, 4=0.7%, 8=83.3%, 16=15.8%, 32=0.0%, >=64=0.0% 00:31:04.682 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:04.682 complete : 0=0.0%, 4=87.0%, 8=12.9%, 16=0.2%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:04.682 issued rwts: total=1602,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:04.682 latency : target=0, window=0, percentile=100.00%, depth=16 00:31:04.682 filename2: (groupid=0, jobs=1): err= 0: pid=83142: Thu Apr 18 09:59:26 2024 00:31:04.682 read: IOPS=139, BW=556KiB/s (570kB/s)(5568KiB/10010msec) 00:31:04.682 slat (usec): min=4, max=3690, avg=21.07, stdev=127.46 00:31:04.682 clat (msec): min=4, max=184, avg=114.80, stdev=27.67 00:31:04.682 lat (msec): min=4, max=184, avg=114.82, stdev=27.67 00:31:04.682 clat percentiles (msec): 00:31:04.682 | 1.00th=[ 13], 5.00th=[ 82], 10.00th=[ 88], 20.00th=[ 93], 00:31:04.682 | 30.00th=[ 96], 40.00th=[ 110], 50.00th=[ 126], 60.00th=[ 131], 00:31:04.682 | 70.00th=[ 133], 80.00th=[ 136], 90.00th=[ 142], 95.00th=[ 144], 00:31:04.682 | 99.00th=[ 184], 99.50th=[ 184], 99.90th=[ 184], 99.95th=[ 184], 00:31:04.682 | 99.99th=[ 184] 00:31:04.682 bw ( KiB/s): min= 400, max= 768, per=3.91%, avg=538.11, stdev=103.99, samples=19 00:31:04.682 iops : min= 100, max= 192, avg=134.53, stdev=26.00, samples=19 00:31:04.682 lat (msec) : 10=0.14%, 20=2.01%, 50=0.29%, 100=32.33%, 250=65.23% 00:31:04.682 cpu : usr=38.59%, sys=2.33%, ctx=1261, majf=0, minf=1075 00:31:04.682 IO depths : 1=0.1%, 2=6.3%, 4=25.0%, 8=56.2%, 16=12.4%, 32=0.0%, >=64=0.0% 00:31:04.682 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:04.682 complete : 0=0.0%, 4=94.4%, 8=0.0%, 16=5.6%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:04.682 issued rwts: total=1392,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:04.682 latency : target=0, window=0, percentile=100.00%, depth=16 00:31:04.682 filename2: (groupid=0, jobs=1): err= 0: pid=83143: Thu Apr 18 09:59:26 2024 00:31:04.682 read: IOPS=144, BW=578KiB/s (592kB/s)(5832KiB/10084msec) 00:31:04.682 slat (usec): min=5, max=8049, avg=45.42, stdev=445.66 00:31:04.682 clat (msec): min=6, max=203, avg=110.01, stdev=38.17 00:31:04.682 lat (msec): min=6, max=203, avg=110.05, 
stdev=38.18 00:31:04.682 clat percentiles (msec): 00:31:04.682 | 1.00th=[ 7], 5.00th=[ 49], 10.00th=[ 61], 20.00th=[ 83], 00:31:04.682 | 30.00th=[ 94], 40.00th=[ 96], 50.00th=[ 108], 60.00th=[ 132], 00:31:04.682 | 70.00th=[ 132], 80.00th=[ 142], 90.00th=[ 157], 95.00th=[ 171], 00:31:04.682 | 99.00th=[ 192], 99.50th=[ 205], 99.90th=[ 205], 99.95th=[ 205], 00:31:04.682 | 99.99th=[ 205] 00:31:04.682 bw ( KiB/s): min= 384, max= 1136, per=4.21%, avg=579.00, stdev=190.16, samples=20 00:31:04.682 iops : min= 96, max= 284, avg=144.75, stdev=47.54, samples=20 00:31:04.682 lat (msec) : 10=1.10%, 20=1.10%, 50=3.16%, 100=40.33%, 250=54.32% 00:31:04.682 cpu : usr=32.29%, sys=1.84%, ctx=870, majf=0, minf=1073 00:31:04.682 IO depths : 1=0.1%, 2=3.8%, 4=15.3%, 8=66.9%, 16=13.9%, 32=0.0%, >=64=0.0% 00:31:04.682 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:04.682 complete : 0=0.0%, 4=91.5%, 8=5.2%, 16=3.4%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:04.682 issued rwts: total=1458,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:04.682 latency : target=0, window=0, percentile=100.00%, depth=16 00:31:04.682 filename2: (groupid=0, jobs=1): err= 0: pid=83144: Thu Apr 18 09:59:26 2024 00:31:04.682 read: IOPS=159, BW=636KiB/s (652kB/s)(6396KiB/10052msec) 00:31:04.682 slat (usec): min=5, max=8028, avg=30.50, stdev=346.87 00:31:04.682 clat (msec): min=24, max=171, avg=100.26, stdev=27.44 00:31:04.682 lat (msec): min=24, max=171, avg=100.29, stdev=27.45 00:31:04.682 clat percentiles (msec): 00:31:04.682 | 1.00th=[ 41], 5.00th=[ 60], 10.00th=[ 62], 20.00th=[ 77], 00:31:04.682 | 30.00th=[ 85], 40.00th=[ 94], 50.00th=[ 96], 60.00th=[ 107], 00:31:04.682 | 70.00th=[ 121], 80.00th=[ 132], 90.00th=[ 134], 95.00th=[ 142], 00:31:04.682 | 99.00th=[ 153], 99.50th=[ 169], 99.90th=[ 171], 99.95th=[ 171], 00:31:04.682 | 99.99th=[ 171] 00:31:04.682 bw ( KiB/s): min= 496, max= 872, per=4.62%, avg=635.70, stdev=103.97, samples=20 00:31:04.682 iops : min= 124, max= 218, avg=158.90, stdev=25.97, samples=20 00:31:04.682 lat (msec) : 50=4.00%, 100=53.60%, 250=42.40% 00:31:04.682 cpu : usr=32.09%, sys=1.57%, ctx=861, majf=0, minf=1072 00:31:04.682 IO depths : 1=0.1%, 2=0.3%, 4=1.2%, 8=82.6%, 16=15.9%, 32=0.0%, >=64=0.0% 00:31:04.682 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:04.682 complete : 0=0.0%, 4=87.3%, 8=12.5%, 16=0.3%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:04.682 issued rwts: total=1599,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:04.682 latency : target=0, window=0, percentile=100.00%, depth=16 00:31:04.682 filename2: (groupid=0, jobs=1): err= 0: pid=83145: Thu Apr 18 09:59:26 2024 00:31:04.682 read: IOPS=138, BW=556KiB/s (569kB/s)(5564KiB/10009msec) 00:31:04.682 slat (usec): min=4, max=8041, avg=21.72, stdev=215.26 00:31:04.682 clat (msec): min=12, max=188, avg=114.92, stdev=27.34 00:31:04.682 lat (msec): min=12, max=188, avg=114.94, stdev=27.35 00:31:04.682 clat percentiles (msec): 00:31:04.682 | 1.00th=[ 29], 5.00th=[ 70], 10.00th=[ 82], 20.00th=[ 94], 00:31:04.682 | 30.00th=[ 96], 40.00th=[ 112], 50.00th=[ 125], 60.00th=[ 130], 00:31:04.682 | 70.00th=[ 133], 80.00th=[ 136], 90.00th=[ 142], 95.00th=[ 144], 00:31:04.682 | 99.00th=[ 169], 99.50th=[ 178], 99.90th=[ 188], 99.95th=[ 188], 00:31:04.682 | 99.99th=[ 188] 00:31:04.682 bw ( KiB/s): min= 396, max= 768, per=3.92%, avg=539.63, stdev=94.75, samples=19 00:31:04.682 iops : min= 99, max= 192, avg=134.89, stdev=23.68, samples=19 00:31:04.682 lat (msec) : 20=0.29%, 50=1.51%, 100=33.64%, 250=64.56% 00:31:04.682 cpu : usr=36.57%, 
sys=2.34%, ctx=1070, majf=0, minf=1073 00:31:04.682 IO depths : 1=0.1%, 2=4.8%, 4=19.2%, 8=62.7%, 16=13.2%, 32=0.0%, >=64=0.0% 00:31:04.682 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:04.682 complete : 0=0.0%, 4=92.6%, 8=3.2%, 16=4.3%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:04.682 issued rwts: total=1391,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:04.682 latency : target=0, window=0, percentile=100.00%, depth=16 00:31:04.682 filename2: (groupid=0, jobs=1): err= 0: pid=83146: Thu Apr 18 09:59:26 2024 00:31:04.682 read: IOPS=139, BW=557KiB/s (570kB/s)(5568KiB/10004msec) 00:31:04.682 slat (usec): min=5, max=4032, avg=21.56, stdev=152.23 00:31:04.682 clat (msec): min=6, max=198, avg=114.84, stdev=28.95 00:31:04.682 lat (msec): min=6, max=198, avg=114.86, stdev=28.95 00:31:04.682 clat percentiles (msec): 00:31:04.682 | 1.00th=[ 7], 5.00th=[ 80], 10.00th=[ 89], 20.00th=[ 94], 00:31:04.682 | 30.00th=[ 96], 40.00th=[ 107], 50.00th=[ 126], 60.00th=[ 130], 00:31:04.682 | 70.00th=[ 133], 80.00th=[ 136], 90.00th=[ 144], 95.00th=[ 146], 00:31:04.682 | 99.00th=[ 176], 99.50th=[ 176], 99.90th=[ 199], 99.95th=[ 199], 00:31:04.682 | 99.99th=[ 199] 00:31:04.682 bw ( KiB/s): min= 400, max= 752, per=3.86%, avg=531.89, stdev=93.66, samples=19 00:31:04.682 iops : min= 100, max= 188, avg=132.95, stdev=23.38, samples=19 00:31:04.682 lat (msec) : 10=1.01%, 20=1.44%, 50=1.01%, 100=29.89%, 250=66.67% 00:31:04.682 cpu : usr=41.60%, sys=2.22%, ctx=1403, majf=0, minf=1075 00:31:04.682 IO depths : 1=0.1%, 2=6.2%, 4=24.9%, 8=56.3%, 16=12.6%, 32=0.0%, >=64=0.0% 00:31:04.682 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:04.682 complete : 0=0.0%, 4=94.5%, 8=0.0%, 16=5.5%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:04.682 issued rwts: total=1392,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:04.682 latency : target=0, window=0, percentile=100.00%, depth=16 00:31:04.682 filename2: (groupid=0, jobs=1): err= 0: pid=83147: Thu Apr 18 09:59:26 2024 00:31:04.682 read: IOPS=152, BW=611KiB/s (625kB/s)(6152KiB/10075msec) 00:31:04.682 slat (usec): min=5, max=10042, avg=28.44, stdev=293.73 00:31:04.682 clat (msec): min=38, max=200, avg=104.50, stdev=32.66 00:31:04.682 lat (msec): min=38, max=200, avg=104.53, stdev=32.66 00:31:04.682 clat percentiles (msec): 00:31:04.683 | 1.00th=[ 42], 5.00th=[ 56], 10.00th=[ 63], 20.00th=[ 75], 00:31:04.683 | 30.00th=[ 85], 40.00th=[ 93], 50.00th=[ 97], 60.00th=[ 115], 00:31:04.683 | 70.00th=[ 130], 80.00th=[ 134], 90.00th=[ 142], 95.00th=[ 148], 00:31:04.683 | 99.00th=[ 192], 99.50th=[ 201], 99.90th=[ 201], 99.95th=[ 201], 00:31:04.683 | 99.99th=[ 201] 00:31:04.683 bw ( KiB/s): min= 336, max= 864, per=4.42%, avg=608.85, stdev=152.68, samples=20 00:31:04.683 iops : min= 84, max= 216, avg=152.20, stdev=38.16, samples=20 00:31:04.683 lat (msec) : 50=3.97%, 100=48.63%, 250=47.40% 00:31:04.683 cpu : usr=40.70%, sys=2.24%, ctx=1529, majf=0, minf=1073 00:31:04.683 IO depths : 1=0.1%, 2=2.4%, 4=9.6%, 8=73.5%, 16=14.5%, 32=0.0%, >=64=0.0% 00:31:04.683 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:04.683 complete : 0=0.0%, 4=89.6%, 8=8.3%, 16=2.1%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:04.683 issued rwts: total=1538,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:04.683 latency : target=0, window=0, percentile=100.00%, depth=16 00:31:04.683 00:31:04.683 Run status group 0 (all jobs): 00:31:04.683 READ: bw=13.4MiB/s (14.1MB/s), 523KiB/s-644KiB/s (535kB/s-659kB/s), io=136MiB (142MB), run=10004-10098msec 00:31:04.942 
----------------------------------------------------- 00:31:04.942 Suppressions used: 00:31:04.942 count bytes template 00:31:04.942 45 402 /usr/src/fio/parse.c 00:31:04.942 1 8 libtcmalloc_minimal.so 00:31:04.942 1 904 libcrypto.so 00:31:04.942 ----------------------------------------------------- 00:31:04.942 00:31:04.942 09:59:28 -- target/dif.sh@113 -- # destroy_subsystems 0 1 2 00:31:04.942 09:59:28 -- target/dif.sh@43 -- # local sub 00:31:04.942 09:59:28 -- target/dif.sh@45 -- # for sub in "$@" 00:31:04.942 09:59:28 -- target/dif.sh@46 -- # destroy_subsystem 0 00:31:04.942 09:59:28 -- target/dif.sh@36 -- # local sub_id=0 00:31:04.942 09:59:28 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:31:04.942 09:59:28 -- common/autotest_common.sh@549 -- # xtrace_disable 00:31:04.942 09:59:28 -- common/autotest_common.sh@10 -- # set +x 00:31:04.942 09:59:28 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:31:04.942 09:59:28 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:31:04.942 09:59:28 -- common/autotest_common.sh@549 -- # xtrace_disable 00:31:04.942 09:59:28 -- common/autotest_common.sh@10 -- # set +x 00:31:04.942 09:59:28 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:31:04.942 09:59:28 -- target/dif.sh@45 -- # for sub in "$@" 00:31:04.942 09:59:28 -- target/dif.sh@46 -- # destroy_subsystem 1 00:31:04.942 09:59:28 -- target/dif.sh@36 -- # local sub_id=1 00:31:04.942 09:59:28 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:31:04.942 09:59:28 -- common/autotest_common.sh@549 -- # xtrace_disable 00:31:04.942 09:59:28 -- common/autotest_common.sh@10 -- # set +x 00:31:04.942 09:59:28 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:31:04.942 09:59:28 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:31:04.942 09:59:28 -- common/autotest_common.sh@549 -- # xtrace_disable 00:31:04.942 09:59:28 -- common/autotest_common.sh@10 -- # set +x 00:31:04.942 09:59:28 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:31:04.942 09:59:28 -- target/dif.sh@45 -- # for sub in "$@" 00:31:04.942 09:59:28 -- target/dif.sh@46 -- # destroy_subsystem 2 00:31:04.942 09:59:28 -- target/dif.sh@36 -- # local sub_id=2 00:31:04.942 09:59:28 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:31:04.942 09:59:28 -- common/autotest_common.sh@549 -- # xtrace_disable 00:31:04.942 09:59:28 -- common/autotest_common.sh@10 -- # set +x 00:31:04.942 09:59:28 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:31:04.942 09:59:28 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null2 00:31:04.942 09:59:28 -- common/autotest_common.sh@549 -- # xtrace_disable 00:31:04.942 09:59:28 -- common/autotest_common.sh@10 -- # set +x 00:31:04.942 09:59:28 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:31:04.942 09:59:28 -- target/dif.sh@115 -- # NULL_DIF=1 00:31:04.942 09:59:28 -- target/dif.sh@115 -- # bs=8k,16k,128k 00:31:04.942 09:59:28 -- target/dif.sh@115 -- # numjobs=2 00:31:04.942 09:59:28 -- target/dif.sh@115 -- # iodepth=8 00:31:04.942 09:59:28 -- target/dif.sh@115 -- # runtime=5 00:31:04.942 09:59:28 -- target/dif.sh@115 -- # files=1 00:31:04.942 09:59:28 -- target/dif.sh@117 -- # create_subsystems 0 1 00:31:04.942 09:59:28 -- target/dif.sh@28 -- # local sub 00:31:04.942 09:59:28 -- target/dif.sh@30 -- # for sub in "$@" 00:31:04.942 09:59:28 -- target/dif.sh@31 -- # create_subsystem 0 00:31:04.942 09:59:28 -- target/dif.sh@18 -- # local sub_id=0 00:31:04.942 
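
Each pass of this test recreates the null bdevs with a different protection setting; after --dif-type 3 and --dif-type 2 above, the create call traced below uses --dif-type 1 with 8k/16k/128k block sizes on the fio side. An optional sanity check, not something dif.sh runs itself, is to ask the target what the bdev actually advertises (field names assumed from bdev_get_bdevs output):

  scripts/rpc.py bdev_get_bdevs -b bdev_null0 \
    | jq '.[0] | {block_size, md_size, dif_type, dif_is_head_of_md}'
  # expected for this step: 512-byte blocks, 16 bytes of metadata, dif_type 1
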
09:59:28 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:31:04.942 09:59:28 -- common/autotest_common.sh@549 -- # xtrace_disable 00:31:04.942 09:59:28 -- common/autotest_common.sh@10 -- # set +x 00:31:04.942 bdev_null0 00:31:04.942 09:59:28 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:31:04.942 09:59:28 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:31:04.942 09:59:28 -- common/autotest_common.sh@549 -- # xtrace_disable 00:31:04.942 09:59:28 -- common/autotest_common.sh@10 -- # set +x 00:31:05.202 09:59:28 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:31:05.202 09:59:28 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:31:05.202 09:59:28 -- common/autotest_common.sh@549 -- # xtrace_disable 00:31:05.202 09:59:28 -- common/autotest_common.sh@10 -- # set +x 00:31:05.202 09:59:28 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:31:05.202 09:59:28 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:31:05.202 09:59:28 -- common/autotest_common.sh@549 -- # xtrace_disable 00:31:05.202 09:59:28 -- common/autotest_common.sh@10 -- # set +x 00:31:05.202 [2024-04-18 09:59:28.350768] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:31:05.202 09:59:28 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:31:05.202 09:59:28 -- target/dif.sh@30 -- # for sub in "$@" 00:31:05.202 09:59:28 -- target/dif.sh@31 -- # create_subsystem 1 00:31:05.202 09:59:28 -- target/dif.sh@18 -- # local sub_id=1 00:31:05.202 09:59:28 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:31:05.202 09:59:28 -- common/autotest_common.sh@549 -- # xtrace_disable 00:31:05.202 09:59:28 -- common/autotest_common.sh@10 -- # set +x 00:31:05.202 bdev_null1 00:31:05.202 09:59:28 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:31:05.202 09:59:28 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:31:05.202 09:59:28 -- common/autotest_common.sh@549 -- # xtrace_disable 00:31:05.202 09:59:28 -- common/autotest_common.sh@10 -- # set +x 00:31:05.202 09:59:28 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:31:05.202 09:59:28 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:31:05.202 09:59:28 -- common/autotest_common.sh@549 -- # xtrace_disable 00:31:05.202 09:59:28 -- common/autotest_common.sh@10 -- # set +x 00:31:05.202 09:59:28 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:31:05.202 09:59:28 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:31:05.202 09:59:28 -- common/autotest_common.sh@549 -- # xtrace_disable 00:31:05.202 09:59:28 -- common/autotest_common.sh@10 -- # set +x 00:31:05.202 09:59:28 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:31:05.202 09:59:28 -- target/dif.sh@118 -- # fio /dev/fd/62 00:31:05.202 09:59:28 -- target/dif.sh@118 -- # create_json_sub_conf 0 1 00:31:05.202 09:59:28 -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:31:05.202 09:59:28 -- nvmf/common.sh@521 -- # config=() 00:31:05.202 09:59:28 -- nvmf/common.sh@521 -- # local subsystem config 00:31:05.202 09:59:28 -- target/dif.sh@82 -- # gen_fio_conf 00:31:05.202 09:59:28 -- target/dif.sh@82 -- # 
fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:31:05.202 09:59:28 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:31:05.202 09:59:28 -- target/dif.sh@54 -- # local file 00:31:05.202 09:59:28 -- target/dif.sh@56 -- # cat 00:31:05.202 09:59:28 -- common/autotest_common.sh@1342 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:31:05.202 09:59:28 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:31:05.202 { 00:31:05.202 "params": { 00:31:05.202 "name": "Nvme$subsystem", 00:31:05.202 "trtype": "$TEST_TRANSPORT", 00:31:05.202 "traddr": "$NVMF_FIRST_TARGET_IP", 00:31:05.202 "adrfam": "ipv4", 00:31:05.202 "trsvcid": "$NVMF_PORT", 00:31:05.202 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:31:05.202 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:31:05.202 "hdgst": ${hdgst:-false}, 00:31:05.202 "ddgst": ${ddgst:-false} 00:31:05.202 }, 00:31:05.202 "method": "bdev_nvme_attach_controller" 00:31:05.202 } 00:31:05.202 EOF 00:31:05.202 )") 00:31:05.202 09:59:28 -- common/autotest_common.sh@1323 -- # local fio_dir=/usr/src/fio 00:31:05.202 09:59:28 -- common/autotest_common.sh@1325 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:31:05.202 09:59:28 -- common/autotest_common.sh@1325 -- # local sanitizers 00:31:05.202 09:59:28 -- common/autotest_common.sh@1326 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:31:05.202 09:59:28 -- common/autotest_common.sh@1327 -- # shift 00:31:05.202 09:59:28 -- common/autotest_common.sh@1329 -- # local asan_lib= 00:31:05.202 09:59:28 -- nvmf/common.sh@543 -- # cat 00:31:05.202 09:59:28 -- common/autotest_common.sh@1330 -- # for sanitizer in "${sanitizers[@]}" 00:31:05.202 09:59:28 -- target/dif.sh@72 -- # (( file = 1 )) 00:31:05.202 09:59:28 -- target/dif.sh@72 -- # (( file <= files )) 00:31:05.202 09:59:28 -- target/dif.sh@73 -- # cat 00:31:05.202 09:59:28 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:31:05.202 09:59:28 -- common/autotest_common.sh@1331 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:31:05.202 09:59:28 -- common/autotest_common.sh@1331 -- # grep libasan 00:31:05.202 09:59:28 -- common/autotest_common.sh@1331 -- # awk '{print $3}' 00:31:05.202 09:59:28 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:31:05.202 { 00:31:05.202 "params": { 00:31:05.202 "name": "Nvme$subsystem", 00:31:05.202 "trtype": "$TEST_TRANSPORT", 00:31:05.202 "traddr": "$NVMF_FIRST_TARGET_IP", 00:31:05.202 "adrfam": "ipv4", 00:31:05.202 "trsvcid": "$NVMF_PORT", 00:31:05.202 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:31:05.202 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:31:05.202 "hdgst": ${hdgst:-false}, 00:31:05.202 "ddgst": ${ddgst:-false} 00:31:05.202 }, 00:31:05.202 "method": "bdev_nvme_attach_controller" 00:31:05.202 } 00:31:05.202 EOF 00:31:05.202 )") 00:31:05.202 09:59:28 -- target/dif.sh@72 -- # (( file++ )) 00:31:05.202 09:59:28 -- target/dif.sh@72 -- # (( file <= files )) 00:31:05.202 09:59:28 -- nvmf/common.sh@543 -- # cat 00:31:05.203 09:59:28 -- nvmf/common.sh@545 -- # jq . 
00:31:05.203 09:59:28 -- nvmf/common.sh@546 -- # IFS=, 00:31:05.203 09:59:28 -- nvmf/common.sh@547 -- # printf '%s\n' '{ 00:31:05.203 "params": { 00:31:05.203 "name": "Nvme0", 00:31:05.203 "trtype": "tcp", 00:31:05.203 "traddr": "10.0.0.2", 00:31:05.203 "adrfam": "ipv4", 00:31:05.203 "trsvcid": "4420", 00:31:05.203 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:31:05.203 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:31:05.203 "hdgst": false, 00:31:05.203 "ddgst": false 00:31:05.203 }, 00:31:05.203 "method": "bdev_nvme_attach_controller" 00:31:05.203 },{ 00:31:05.203 "params": { 00:31:05.203 "name": "Nvme1", 00:31:05.203 "trtype": "tcp", 00:31:05.203 "traddr": "10.0.0.2", 00:31:05.203 "adrfam": "ipv4", 00:31:05.203 "trsvcid": "4420", 00:31:05.203 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:31:05.203 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:31:05.203 "hdgst": false, 00:31:05.203 "ddgst": false 00:31:05.203 }, 00:31:05.203 "method": "bdev_nvme_attach_controller" 00:31:05.203 }' 00:31:05.203 09:59:28 -- common/autotest_common.sh@1331 -- # asan_lib=/usr/lib64/libasan.so.8 00:31:05.203 09:59:28 -- common/autotest_common.sh@1332 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:31:05.203 09:59:28 -- common/autotest_common.sh@1333 -- # break 00:31:05.203 09:59:28 -- common/autotest_common.sh@1338 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:31:05.203 09:59:28 -- common/autotest_common.sh@1338 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:31:05.462 filename0: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:31:05.462 ... 00:31:05.462 filename1: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:31:05.462 ... 
00:31:05.462 fio-3.35 00:31:05.462 Starting 4 threads 00:31:12.027 00:31:12.027 filename0: (groupid=0, jobs=1): err= 0: pid=83291: Thu Apr 18 09:59:34 2024 00:31:12.027 read: IOPS=1351, BW=10.6MiB/s (11.1MB/s)(52.8MiB/5004msec) 00:31:12.027 slat (nsec): min=4426, max=63897, avg=14490.88, stdev=5065.71 00:31:12.027 clat (usec): min=1936, max=16236, avg=5859.17, stdev=775.29 00:31:12.027 lat (usec): min=1948, max=16268, avg=5873.66, stdev=775.38 00:31:12.027 clat percentiles (usec): 00:31:12.027 | 1.00th=[ 3195], 5.00th=[ 4080], 10.00th=[ 4686], 20.00th=[ 5932], 00:31:12.027 | 30.00th=[ 5997], 40.00th=[ 5997], 50.00th=[ 6063], 60.00th=[ 6063], 00:31:12.027 | 70.00th=[ 6063], 80.00th=[ 6128], 90.00th=[ 6194], 95.00th=[ 6325], 00:31:12.027 | 99.00th=[ 7570], 99.50th=[ 8029], 99.90th=[11600], 99.95th=[11600], 00:31:12.027 | 99.99th=[16188] 00:31:12.027 bw ( KiB/s): min=10368, max=12048, per=20.37%, avg=10814.40, stdev=668.41, samples=10 00:31:12.027 iops : min= 1296, max= 1506, avg=1351.80, stdev=83.55, samples=10 00:31:12.027 lat (msec) : 2=0.10%, 4=1.11%, 10=98.67%, 20=0.12% 00:31:12.027 cpu : usr=91.67%, sys=7.44%, ctx=49, majf=0, minf=1075 00:31:12.027 IO depths : 1=0.1%, 2=20.0%, 4=54.0%, 8=26.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:31:12.027 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:12.027 complete : 0=0.0%, 4=92.1%, 8=7.9%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:12.027 issued rwts: total=6764,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:12.027 latency : target=0, window=0, percentile=100.00%, depth=8 00:31:12.027 filename0: (groupid=0, jobs=1): err= 0: pid=83292: Thu Apr 18 09:59:34 2024 00:31:12.027 read: IOPS=1761, BW=13.8MiB/s (14.4MB/s)(68.9MiB/5003msec) 00:31:12.027 slat (nsec): min=5525, max=72995, avg=18530.48, stdev=4620.91 00:31:12.027 clat (usec): min=1579, max=15333, avg=4496.97, stdev=1340.88 00:31:12.027 lat (usec): min=1601, max=15355, avg=4515.50, stdev=1340.08 00:31:12.027 clat percentiles (usec): 00:31:12.027 | 1.00th=[ 2474], 5.00th=[ 2540], 10.00th=[ 2573], 20.00th=[ 3294], 00:31:12.027 | 30.00th=[ 3687], 40.00th=[ 3720], 50.00th=[ 4113], 60.00th=[ 5342], 00:31:12.027 | 70.00th=[ 5800], 80.00th=[ 5866], 90.00th=[ 5997], 95.00th=[ 6128], 00:31:12.027 | 99.00th=[ 6652], 99.50th=[ 7898], 99.90th=[ 8455], 99.95th=[10683], 00:31:12.027 | 99.99th=[15270] 00:31:12.027 bw ( KiB/s): min=12120, max=14432, per=26.54%, avg=14092.00, stdev=700.20, samples=10 00:31:12.027 iops : min= 1515, max= 1804, avg=1761.50, stdev=87.53, samples=10 00:31:12.027 lat (msec) : 2=0.36%, 4=45.72%, 10=53.82%, 20=0.09% 00:31:12.027 cpu : usr=91.74%, sys=7.18%, ctx=38, majf=0, minf=1074 00:31:12.027 IO depths : 1=0.1%, 2=0.1%, 4=64.7%, 8=35.3%, 16=0.0%, 32=0.0%, >=64=0.0% 00:31:12.027 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:12.027 complete : 0=0.0%, 4=100.0%, 8=0.1%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:12.027 issued rwts: total=8814,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:12.027 latency : target=0, window=0, percentile=100.00%, depth=8 00:31:12.027 filename1: (groupid=0, jobs=1): err= 0: pid=83293: Thu Apr 18 09:59:34 2024 00:31:12.027 read: IOPS=1762, BW=13.8MiB/s (14.4MB/s)(68.9MiB/5002msec) 00:31:12.027 slat (nsec): min=5531, max=57915, avg=17660.17, stdev=4572.65 00:31:12.027 clat (usec): min=1599, max=13947, avg=4496.80, stdev=1331.22 00:31:12.027 lat (usec): min=1610, max=13980, avg=4514.46, stdev=1331.45 00:31:12.027 clat percentiles (usec): 00:31:12.027 | 1.00th=[ 2474], 5.00th=[ 2540], 10.00th=[ 2573], 20.00th=[ 
3294], 00:31:12.027 | 30.00th=[ 3687], 40.00th=[ 3720], 50.00th=[ 4113], 60.00th=[ 5342], 00:31:12.027 | 70.00th=[ 5800], 80.00th=[ 5866], 90.00th=[ 5997], 95.00th=[ 6128], 00:31:12.027 | 99.00th=[ 6652], 99.50th=[ 7898], 99.90th=[ 8455], 99.95th=[ 9241], 00:31:12.027 | 99.99th=[13960] 00:31:12.027 bw ( KiB/s): min=12192, max=14432, per=26.54%, avg=14092.44, stdev=719.23, samples=9 00:31:12.027 iops : min= 1524, max= 1804, avg=1761.56, stdev=89.90, samples=9 00:31:12.027 lat (msec) : 2=0.28%, 4=45.79%, 10=53.92%, 20=0.01% 00:31:12.027 cpu : usr=91.24%, sys=7.70%, ctx=6, majf=0, minf=1076 00:31:12.027 IO depths : 1=0.1%, 2=0.1%, 4=64.7%, 8=35.3%, 16=0.0%, 32=0.0%, >=64=0.0% 00:31:12.027 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:12.027 complete : 0=0.0%, 4=100.0%, 8=0.1%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:12.027 issued rwts: total=8817,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:12.027 latency : target=0, window=0, percentile=100.00%, depth=8 00:31:12.027 filename1: (groupid=0, jobs=1): err= 0: pid=83294: Thu Apr 18 09:59:34 2024 00:31:12.027 read: IOPS=1763, BW=13.8MiB/s (14.4MB/s)(68.9MiB/5001msec) 00:31:12.027 slat (nsec): min=5651, max=52513, avg=18385.31, stdev=4933.11 00:31:12.027 clat (usec): min=593, max=12885, avg=4492.10, stdev=1333.47 00:31:12.027 lat (usec): min=610, max=12915, avg=4510.48, stdev=1332.29 00:31:12.027 clat percentiles (usec): 00:31:12.027 | 1.00th=[ 2474], 5.00th=[ 2540], 10.00th=[ 2540], 20.00th=[ 3261], 00:31:12.027 | 30.00th=[ 3687], 40.00th=[ 3720], 50.00th=[ 4113], 60.00th=[ 5342], 00:31:12.027 | 70.00th=[ 5800], 80.00th=[ 5866], 90.00th=[ 5997], 95.00th=[ 6128], 00:31:12.027 | 99.00th=[ 6652], 99.50th=[ 7898], 99.90th=[ 8225], 99.95th=[ 8291], 00:31:12.027 | 99.99th=[12911] 00:31:12.027 bw ( KiB/s): min=12216, max=14432, per=26.55%, avg=14095.11, stdev=711.31, samples=9 00:31:12.027 iops : min= 1527, max= 1804, avg=1761.89, stdev=88.91, samples=9 00:31:12.027 lat (usec) : 750=0.01% 00:31:12.027 lat (msec) : 2=0.37%, 4=45.70%, 10=53.90%, 20=0.01% 00:31:12.027 cpu : usr=91.62%, sys=7.28%, ctx=9, majf=0, minf=1076 00:31:12.027 IO depths : 1=0.1%, 2=0.1%, 4=64.7%, 8=35.3%, 16=0.0%, 32=0.0%, >=64=0.0% 00:31:12.027 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:12.027 complete : 0=0.0%, 4=100.0%, 8=0.1%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:12.027 issued rwts: total=8818,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:12.027 latency : target=0, window=0, percentile=100.00%, depth=8 00:31:12.027 00:31:12.027 Run status group 0 (all jobs): 00:31:12.027 READ: bw=51.9MiB/s (54.4MB/s), 10.6MiB/s-13.8MiB/s (11.1MB/s-14.4MB/s), io=259MiB (272MB), run=5001-5004msec 00:31:12.595 ----------------------------------------------------- 00:31:12.595 Suppressions used: 00:31:12.595 count bytes template 00:31:12.595 6 52 /usr/src/fio/parse.c 00:31:12.595 1 8 libtcmalloc_minimal.so 00:31:12.595 1 904 libcrypto.so 00:31:12.595 ----------------------------------------------------- 00:31:12.595 00:31:12.595 09:59:35 -- target/dif.sh@119 -- # destroy_subsystems 0 1 00:31:12.595 09:59:35 -- target/dif.sh@43 -- # local sub 00:31:12.595 09:59:35 -- target/dif.sh@45 -- # for sub in "$@" 00:31:12.595 09:59:35 -- target/dif.sh@46 -- # destroy_subsystem 0 00:31:12.595 09:59:35 -- target/dif.sh@36 -- # local sub_id=0 00:31:12.595 09:59:35 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:31:12.595 09:59:35 -- common/autotest_common.sh@549 -- # xtrace_disable 00:31:12.595 09:59:35 -- 
common/autotest_common.sh@10 -- # set +x 00:31:12.595 09:59:35 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:31:12.595 09:59:35 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:31:12.595 09:59:35 -- common/autotest_common.sh@549 -- # xtrace_disable 00:31:12.595 09:59:35 -- common/autotest_common.sh@10 -- # set +x 00:31:12.595 09:59:35 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:31:12.595 09:59:35 -- target/dif.sh@45 -- # for sub in "$@" 00:31:12.595 09:59:35 -- target/dif.sh@46 -- # destroy_subsystem 1 00:31:12.595 09:59:35 -- target/dif.sh@36 -- # local sub_id=1 00:31:12.595 09:59:35 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:31:12.595 09:59:35 -- common/autotest_common.sh@549 -- # xtrace_disable 00:31:12.595 09:59:35 -- common/autotest_common.sh@10 -- # set +x 00:31:12.595 09:59:35 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:31:12.595 09:59:35 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:31:12.595 09:59:35 -- common/autotest_common.sh@549 -- # xtrace_disable 00:31:12.595 09:59:35 -- common/autotest_common.sh@10 -- # set +x 00:31:12.595 ************************************ 00:31:12.595 END TEST fio_dif_rand_params 00:31:12.595 ************************************ 00:31:12.595 09:59:35 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:31:12.595 00:31:12.595 real 0m27.813s 00:31:12.595 user 2m8.307s 00:31:12.595 sys 0m8.901s 00:31:12.595 09:59:35 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:31:12.595 09:59:35 -- common/autotest_common.sh@10 -- # set +x 00:31:12.595 09:59:35 -- target/dif.sh@144 -- # run_test fio_dif_digest fio_dif_digest 00:31:12.595 09:59:35 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:31:12.595 09:59:35 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:31:12.595 09:59:35 -- common/autotest_common.sh@10 -- # set +x 00:31:12.854 ************************************ 00:31:12.854 START TEST fio_dif_digest 00:31:12.854 ************************************ 00:31:12.854 09:59:36 -- common/autotest_common.sh@1111 -- # fio_dif_digest 00:31:12.854 09:59:36 -- target/dif.sh@123 -- # local NULL_DIF 00:31:12.854 09:59:36 -- target/dif.sh@124 -- # local bs numjobs runtime iodepth files 00:31:12.854 09:59:36 -- target/dif.sh@125 -- # local hdgst ddgst 00:31:12.854 09:59:36 -- target/dif.sh@127 -- # NULL_DIF=3 00:31:12.854 09:59:36 -- target/dif.sh@127 -- # bs=128k,128k,128k 00:31:12.854 09:59:36 -- target/dif.sh@127 -- # numjobs=3 00:31:12.854 09:59:36 -- target/dif.sh@127 -- # iodepth=3 00:31:12.854 09:59:36 -- target/dif.sh@127 -- # runtime=10 00:31:12.854 09:59:36 -- target/dif.sh@128 -- # hdgst=true 00:31:12.854 09:59:36 -- target/dif.sh@128 -- # ddgst=true 00:31:12.854 09:59:36 -- target/dif.sh@130 -- # create_subsystems 0 00:31:12.854 09:59:36 -- target/dif.sh@28 -- # local sub 00:31:12.854 09:59:36 -- target/dif.sh@30 -- # for sub in "$@" 00:31:12.854 09:59:36 -- target/dif.sh@31 -- # create_subsystem 0 00:31:12.854 09:59:36 -- target/dif.sh@18 -- # local sub_id=0 00:31:12.854 09:59:36 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:31:12.854 09:59:36 -- common/autotest_common.sh@549 -- # xtrace_disable 00:31:12.854 09:59:36 -- common/autotest_common.sh@10 -- # set +x 00:31:12.854 bdev_null0 00:31:12.854 09:59:36 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:31:12.854 09:59:36 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 
--allow-any-host 00:31:12.854 09:59:36 -- common/autotest_common.sh@549 -- # xtrace_disable 00:31:12.854 09:59:36 -- common/autotest_common.sh@10 -- # set +x 00:31:12.854 09:59:36 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:31:12.854 09:59:36 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:31:12.854 09:59:36 -- common/autotest_common.sh@549 -- # xtrace_disable 00:31:12.854 09:59:36 -- common/autotest_common.sh@10 -- # set +x 00:31:12.854 09:59:36 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:31:12.854 09:59:36 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:31:12.854 09:59:36 -- common/autotest_common.sh@549 -- # xtrace_disable 00:31:12.854 09:59:36 -- common/autotest_common.sh@10 -- # set +x 00:31:12.854 [2024-04-18 09:59:36.063588] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:31:12.854 09:59:36 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:31:12.854 09:59:36 -- target/dif.sh@131 -- # fio /dev/fd/62 00:31:12.854 09:59:36 -- target/dif.sh@131 -- # create_json_sub_conf 0 00:31:12.854 09:59:36 -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:31:12.854 09:59:36 -- nvmf/common.sh@521 -- # config=() 00:31:12.854 09:59:36 -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:31:12.854 09:59:36 -- nvmf/common.sh@521 -- # local subsystem config 00:31:12.854 09:59:36 -- common/autotest_common.sh@1342 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:31:12.854 09:59:36 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:31:12.854 09:59:36 -- common/autotest_common.sh@1323 -- # local fio_dir=/usr/src/fio 00:31:12.854 09:59:36 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:31:12.854 { 00:31:12.854 "params": { 00:31:12.854 "name": "Nvme$subsystem", 00:31:12.854 "trtype": "$TEST_TRANSPORT", 00:31:12.854 "traddr": "$NVMF_FIRST_TARGET_IP", 00:31:12.854 "adrfam": "ipv4", 00:31:12.854 "trsvcid": "$NVMF_PORT", 00:31:12.854 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:31:12.854 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:31:12.854 "hdgst": ${hdgst:-false}, 00:31:12.854 "ddgst": ${ddgst:-false} 00:31:12.854 }, 00:31:12.854 "method": "bdev_nvme_attach_controller" 00:31:12.854 } 00:31:12.854 EOF 00:31:12.854 )") 00:31:12.854 09:59:36 -- common/autotest_common.sh@1325 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:31:12.854 09:59:36 -- target/dif.sh@82 -- # gen_fio_conf 00:31:12.854 09:59:36 -- common/autotest_common.sh@1325 -- # local sanitizers 00:31:12.854 09:59:36 -- common/autotest_common.sh@1326 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:31:12.854 09:59:36 -- target/dif.sh@54 -- # local file 00:31:12.854 09:59:36 -- common/autotest_common.sh@1327 -- # shift 00:31:12.854 09:59:36 -- common/autotest_common.sh@1329 -- # local asan_lib= 00:31:12.854 09:59:36 -- common/autotest_common.sh@1330 -- # for sanitizer in "${sanitizers[@]}" 00:31:12.854 09:59:36 -- target/dif.sh@56 -- # cat 00:31:12.854 09:59:36 -- nvmf/common.sh@543 -- # cat 00:31:12.854 09:59:36 -- common/autotest_common.sh@1331 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:31:12.854 09:59:36 -- common/autotest_common.sh@1331 -- # awk '{print $3}' 00:31:12.854 09:59:36 -- common/autotest_common.sh@1331 -- # grep libasan 00:31:12.854 09:59:36 -- target/dif.sh@72 -- # (( 
file = 1 )) 00:31:12.854 09:59:36 -- target/dif.sh@72 -- # (( file <= files )) 00:31:12.854 09:59:36 -- nvmf/common.sh@545 -- # jq . 00:31:12.854 09:59:36 -- nvmf/common.sh@546 -- # IFS=, 00:31:12.854 09:59:36 -- nvmf/common.sh@547 -- # printf '%s\n' '{ 00:31:12.854 "params": { 00:31:12.854 "name": "Nvme0", 00:31:12.854 "trtype": "tcp", 00:31:12.854 "traddr": "10.0.0.2", 00:31:12.854 "adrfam": "ipv4", 00:31:12.854 "trsvcid": "4420", 00:31:12.854 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:31:12.854 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:31:12.854 "hdgst": true, 00:31:12.854 "ddgst": true 00:31:12.854 }, 00:31:12.854 "method": "bdev_nvme_attach_controller" 00:31:12.854 }' 00:31:12.854 09:59:36 -- common/autotest_common.sh@1331 -- # asan_lib=/usr/lib64/libasan.so.8 00:31:12.854 09:59:36 -- common/autotest_common.sh@1332 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:31:12.854 09:59:36 -- common/autotest_common.sh@1333 -- # break 00:31:12.854 09:59:36 -- common/autotest_common.sh@1338 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:31:12.854 09:59:36 -- common/autotest_common.sh@1338 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:31:13.113 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:31:13.113 ... 00:31:13.113 fio-3.35 00:31:13.113 Starting 3 threads 00:31:25.330 00:31:25.330 filename0: (groupid=0, jobs=1): err= 0: pid=83414: Thu Apr 18 09:59:47 2024 00:31:25.330 read: IOPS=188, BW=23.5MiB/s (24.7MB/s)(236MiB/10001msec) 00:31:25.330 slat (nsec): min=4886, max=72076, avg=17802.62, stdev=11514.23 00:31:25.330 clat (usec): min=15524, max=19426, avg=15877.50, stdev=272.15 00:31:25.330 lat (usec): min=15535, max=19446, avg=15895.31, stdev=273.93 00:31:25.330 clat percentiles (usec): 00:31:25.330 | 1.00th=[15533], 5.00th=[15664], 10.00th=[15664], 20.00th=[15664], 00:31:25.330 | 30.00th=[15795], 40.00th=[15795], 50.00th=[15795], 60.00th=[15926], 00:31:25.330 | 70.00th=[15926], 80.00th=[16057], 90.00th=[16188], 95.00th=[16319], 00:31:25.330 | 99.00th=[16712], 99.50th=[16909], 99.90th=[19530], 99.95th=[19530], 00:31:25.330 | 99.99th=[19530] 00:31:25.330 bw ( KiB/s): min=23808, max=24576, per=33.35%, avg=24131.37, stdev=389.57, samples=19 00:31:25.330 iops : min= 186, max= 192, avg=188.53, stdev= 3.04, samples=19 00:31:25.330 lat (msec) : 20=100.00% 00:31:25.330 cpu : usr=92.85%, sys=6.48%, ctx=18, majf=0, minf=1076 00:31:25.330 IO depths : 1=33.3%, 2=66.7%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:31:25.330 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:25.330 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:25.330 issued rwts: total=1884,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:25.330 latency : target=0, window=0, percentile=100.00%, depth=3 00:31:25.330 filename0: (groupid=0, jobs=1): err= 0: pid=83415: Thu Apr 18 09:59:47 2024 00:31:25.330 read: IOPS=188, BW=23.6MiB/s (24.7MB/s)(236MiB/10003msec) 00:31:25.330 slat (nsec): min=6082, max=45103, avg=14703.74, stdev=6085.12 00:31:25.330 clat (usec): min=5761, max=18821, avg=15865.18, stdev=502.97 00:31:25.330 lat (usec): min=5771, max=18842, avg=15879.89, stdev=503.29 00:31:25.330 clat percentiles (usec): 00:31:25.330 | 1.00th=[15533], 5.00th=[15664], 10.00th=[15664], 20.00th=[15664], 00:31:25.330 | 30.00th=[15795], 40.00th=[15795], 50.00th=[15795], 60.00th=[15926], 00:31:25.330 | 70.00th=[15926], 80.00th=[16057], 
90.00th=[16188], 95.00th=[16319], 00:31:25.330 | 99.00th=[16909], 99.50th=[16909], 99.90th=[18744], 99.95th=[18744], 00:31:25.330 | 99.99th=[18744] 00:31:25.330 bw ( KiB/s): min=23808, max=24576, per=33.35%, avg=24131.37, stdev=389.57, samples=19 00:31:25.330 iops : min= 186, max= 192, avg=188.53, stdev= 3.04, samples=19 00:31:25.330 lat (msec) : 10=0.16%, 20=99.84% 00:31:25.330 cpu : usr=92.52%, sys=6.86%, ctx=9, majf=0, minf=1074 00:31:25.330 IO depths : 1=33.3%, 2=66.7%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:31:25.330 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:25.330 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:25.330 issued rwts: total=1887,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:25.330 latency : target=0, window=0, percentile=100.00%, depth=3 00:31:25.330 filename0: (groupid=0, jobs=1): err= 0: pid=83416: Thu Apr 18 09:59:47 2024 00:31:25.330 read: IOPS=188, BW=23.5MiB/s (24.7MB/s)(236MiB/10004msec) 00:31:25.330 slat (nsec): min=7389, max=57762, avg=15842.19, stdev=7308.48 00:31:25.330 clat (usec): min=14135, max=23605, avg=15888.68, stdev=393.38 00:31:25.330 lat (usec): min=14146, max=23650, avg=15904.52, stdev=393.87 00:31:25.330 clat percentiles (usec): 00:31:25.330 | 1.00th=[15533], 5.00th=[15664], 10.00th=[15664], 20.00th=[15664], 00:31:25.330 | 30.00th=[15795], 40.00th=[15795], 50.00th=[15795], 60.00th=[15926], 00:31:25.330 | 70.00th=[15926], 80.00th=[16057], 90.00th=[16188], 95.00th=[16319], 00:31:25.330 | 99.00th=[16712], 99.50th=[16909], 99.90th=[23725], 99.95th=[23725], 00:31:25.330 | 99.99th=[23725] 00:31:25.330 bw ( KiB/s): min=23040, max=24576, per=33.29%, avg=24090.95, stdev=458.70, samples=19 00:31:25.330 iops : min= 180, max= 192, avg=188.21, stdev= 3.58, samples=19 00:31:25.330 lat (msec) : 20=99.84%, 50=0.16% 00:31:25.330 cpu : usr=92.92%, sys=6.42%, ctx=16, majf=0, minf=1073 00:31:25.330 IO depths : 1=33.3%, 2=66.7%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:31:25.330 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:25.330 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:25.330 issued rwts: total=1884,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:25.330 latency : target=0, window=0, percentile=100.00%, depth=3 00:31:25.330 00:31:25.330 Run status group 0 (all jobs): 00:31:25.330 READ: bw=70.7MiB/s (74.1MB/s), 23.5MiB/s-23.6MiB/s (24.7MB/s-24.7MB/s), io=707MiB (741MB), run=10001-10004msec 00:31:25.330 ----------------------------------------------------- 00:31:25.330 Suppressions used: 00:31:25.330 count bytes template 00:31:25.330 5 44 /usr/src/fio/parse.c 00:31:25.330 1 8 libtcmalloc_minimal.so 00:31:25.330 1 904 libcrypto.so 00:31:25.331 ----------------------------------------------------- 00:31:25.331 00:31:25.331 09:59:48 -- target/dif.sh@132 -- # destroy_subsystems 0 00:31:25.331 09:59:48 -- target/dif.sh@43 -- # local sub 00:31:25.331 09:59:48 -- target/dif.sh@45 -- # for sub in "$@" 00:31:25.331 09:59:48 -- target/dif.sh@46 -- # destroy_subsystem 0 00:31:25.331 09:59:48 -- target/dif.sh@36 -- # local sub_id=0 00:31:25.331 09:59:48 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:31:25.331 09:59:48 -- common/autotest_common.sh@549 -- # xtrace_disable 00:31:25.331 09:59:48 -- common/autotest_common.sh@10 -- # set +x 00:31:25.331 09:59:48 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:31:25.331 09:59:48 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:31:25.331 
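The trace above captures the pattern target/dif.sh repeats for every fio run: create a DIF-capable null bdev, expose it over NVMe/TCP with the nvmf RPCs, generate a bdev_nvme_attach_controller JSON config on the fly, and drive it through fio's spdk_bdev ioengine. For reference, a minimal hand-run sketch of the same flow; it assumes a running nvmf_tgt on the default RPC socket, SPDK_DIR pointing at a build with the fio bdev plugin, and the 10.0.0.2 listener seen in the log, while /tmp/bdev.json and the job name are illustrative placeholders rather than values taken from this run.

# Illustrative sketch, not part of the captured log.
# Target side: DIF-enabled null bdev exported over NVMe/TCP (mirrors the rpc_cmd calls traced above).
$SPDK_DIR/scripts/rpc.py bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3
$SPDK_DIR/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
$SPDK_DIR/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host
$SPDK_DIR/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0
$SPDK_DIR/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420

# Initiator side: the JSON config attaches the remote namespace as a local bdev.
# The outer subsystems/bdev/config wrapper is SPDK's JSON config layout; the
# traced gen_nvmf_target_json only printf's the inner entries.
cat > /tmp/bdev.json <<'EOF'
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        {
          "method": "bdev_nvme_attach_controller",
          "params": {
            "name": "Nvme0", "trtype": "tcp", "traddr": "10.0.0.2",
            "adrfam": "ipv4", "trsvcid": "4420",
            "subnqn": "nqn.2016-06.io.spdk:cnode0",
            "hostnqn": "nqn.2016-06.io.spdk:host0",
            "hdgst": true, "ddgst": true
          }
        }
      ]
    }
  ]
}
EOF

# fio loads the plugin via LD_PRELOAD, as the LD_PRELOAD line above does (minus
# libasan); thread=1 is required by the spdk_bdev engine.
LD_PRELOAD=$SPDK_DIR/build/fio/spdk_bdev fio --name=dif_digest --thread=1 \
    --ioengine=spdk_bdev --spdk_json_conf=/tmp/bdev.json --filename=Nvme0n1 \
    --rw=randread --bs=128k --iodepth=3 --numjobs=3 --runtime=10 --time_based=1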
09:59:48 -- common/autotest_common.sh@549 -- # xtrace_disable 00:31:25.331 09:59:48 -- common/autotest_common.sh@10 -- # set +x 00:31:25.331 ************************************ 00:31:25.331 END TEST fio_dif_digest 00:31:25.331 ************************************ 00:31:25.331 09:59:48 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:31:25.331 00:31:25.331 real 0m12.584s 00:31:25.331 user 0m29.924s 00:31:25.331 sys 0m2.436s 00:31:25.331 09:59:48 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:31:25.331 09:59:48 -- common/autotest_common.sh@10 -- # set +x 00:31:25.331 09:59:48 -- target/dif.sh@146 -- # trap - SIGINT SIGTERM EXIT 00:31:25.331 09:59:48 -- target/dif.sh@147 -- # nvmftestfini 00:31:25.331 09:59:48 -- nvmf/common.sh@477 -- # nvmfcleanup 00:31:25.331 09:59:48 -- nvmf/common.sh@117 -- # sync 00:31:25.331 09:59:48 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:31:25.331 09:59:48 -- nvmf/common.sh@120 -- # set +e 00:31:25.331 09:59:48 -- nvmf/common.sh@121 -- # for i in {1..20} 00:31:25.331 09:59:48 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:31:25.331 rmmod nvme_tcp 00:31:25.331 rmmod nvme_fabrics 00:31:25.331 rmmod nvme_keyring 00:31:25.589 09:59:48 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:31:25.589 09:59:48 -- nvmf/common.sh@124 -- # set -e 00:31:25.589 09:59:48 -- nvmf/common.sh@125 -- # return 0 00:31:25.589 09:59:48 -- nvmf/common.sh@478 -- # '[' -n 82617 ']' 00:31:25.589 09:59:48 -- nvmf/common.sh@479 -- # killprocess 82617 00:31:25.589 09:59:48 -- common/autotest_common.sh@936 -- # '[' -z 82617 ']' 00:31:25.589 09:59:48 -- common/autotest_common.sh@940 -- # kill -0 82617 00:31:25.589 09:59:48 -- common/autotest_common.sh@941 -- # uname 00:31:25.589 09:59:48 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:31:25.589 09:59:48 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 82617 00:31:25.589 killing process with pid 82617 00:31:25.589 09:59:48 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:31:25.590 09:59:48 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:31:25.590 09:59:48 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 82617' 00:31:25.590 09:59:48 -- common/autotest_common.sh@955 -- # kill 82617 00:31:25.590 09:59:48 -- common/autotest_common.sh@960 -- # wait 82617 00:31:26.965 09:59:49 -- nvmf/common.sh@481 -- # '[' iso == iso ']' 00:31:26.965 09:59:49 -- nvmf/common.sh@482 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:31:26.965 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:31:26.965 Waiting for block devices as requested 00:31:27.224 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:31:27.224 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:31:27.483 09:59:50 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:31:27.483 09:59:50 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:31:27.483 09:59:50 -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:31:27.483 09:59:50 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:31:27.483 09:59:50 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:27.483 09:59:50 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:31:27.483 09:59:50 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:27.483 09:59:50 -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:31:27.483 ************************************ 00:31:27.483 END TEST nvmf_dif 00:31:27.483 
************************************ 00:31:27.483 00:31:27.483 real 1m9.903s 00:31:27.483 user 4m7.513s 00:31:27.483 sys 0m19.768s 00:31:27.483 09:59:50 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:31:27.483 09:59:50 -- common/autotest_common.sh@10 -- # set +x 00:31:27.483 09:59:50 -- spdk/autotest.sh@291 -- # run_test nvmf_abort_qd_sizes /home/vagrant/spdk_repo/spdk/test/nvmf/target/abort_qd_sizes.sh 00:31:27.483 09:59:50 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:31:27.483 09:59:50 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:31:27.483 09:59:50 -- common/autotest_common.sh@10 -- # set +x 00:31:27.483 ************************************ 00:31:27.483 START TEST nvmf_abort_qd_sizes 00:31:27.483 ************************************ 00:31:27.483 09:59:50 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/abort_qd_sizes.sh 00:31:27.483 * Looking for test storage... 00:31:27.483 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:31:27.483 09:59:50 -- target/abort_qd_sizes.sh@14 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:31:27.483 09:59:50 -- nvmf/common.sh@7 -- # uname -s 00:31:27.483 09:59:50 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:31:27.483 09:59:50 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:31:27.483 09:59:50 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:31:27.483 09:59:50 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:31:27.483 09:59:50 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:31:27.483 09:59:50 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:31:27.483 09:59:50 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:31:27.483 09:59:50 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:31:27.483 09:59:50 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:31:27.483 09:59:50 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:31:27.483 09:59:50 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:e0c1fd61-2902-4c83-8893-eeb512aeb67d 00:31:27.483 09:59:50 -- nvmf/common.sh@18 -- # NVME_HOSTID=e0c1fd61-2902-4c83-8893-eeb512aeb67d 00:31:27.483 09:59:50 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:31:27.483 09:59:50 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:31:27.483 09:59:50 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:31:27.483 09:59:50 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:31:27.483 09:59:50 -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:31:27.483 09:59:50 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:31:27.483 09:59:50 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:31:27.483 09:59:50 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:31:27.483 09:59:50 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:27.483 09:59:50 -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:27.483 09:59:50 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:27.483 09:59:50 -- paths/export.sh@5 -- # export PATH 00:31:27.742 09:59:50 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:27.742 09:59:50 -- nvmf/common.sh@47 -- # : 0 00:31:27.742 09:59:50 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:31:27.742 09:59:50 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:31:27.742 09:59:50 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:31:27.742 09:59:50 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:31:27.742 09:59:50 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:31:27.742 09:59:50 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:31:27.742 09:59:50 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:31:27.742 09:59:50 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:31:27.742 09:59:50 -- target/abort_qd_sizes.sh@70 -- # nvmftestinit 00:31:27.742 09:59:50 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:31:27.742 09:59:50 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:31:27.742 09:59:50 -- nvmf/common.sh@437 -- # prepare_net_devs 00:31:27.742 09:59:50 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:31:27.742 09:59:50 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:31:27.742 09:59:50 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:27.742 09:59:50 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:31:27.742 09:59:50 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:27.742 09:59:50 -- nvmf/common.sh@403 -- # [[ virt != virt ]] 00:31:27.742 09:59:50 -- nvmf/common.sh@405 -- # [[ no == yes ]] 00:31:27.742 09:59:50 -- nvmf/common.sh@412 -- # [[ virt == phy ]] 00:31:27.742 09:59:50 -- nvmf/common.sh@415 -- # [[ virt == phy-fallback ]] 00:31:27.742 09:59:50 -- nvmf/common.sh@420 -- # [[ tcp == tcp ]] 00:31:27.742 09:59:50 -- nvmf/common.sh@421 -- # nvmf_veth_init 00:31:27.742 09:59:50 -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:31:27.742 09:59:50 -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:31:27.742 09:59:50 -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:31:27.742 09:59:50 -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:31:27.742 09:59:50 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:31:27.742 09:59:50 -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 
00:31:27.742 09:59:50 -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:31:27.742 09:59:50 -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:31:27.742 09:59:50 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:31:27.742 09:59:50 -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:31:27.742 09:59:50 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:31:27.742 09:59:50 -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:31:27.742 09:59:50 -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:31:27.742 09:59:50 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:31:27.742 Cannot find device "nvmf_tgt_br" 00:31:27.742 09:59:50 -- nvmf/common.sh@155 -- # true 00:31:27.742 09:59:50 -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:31:27.742 Cannot find device "nvmf_tgt_br2" 00:31:27.742 09:59:50 -- nvmf/common.sh@156 -- # true 00:31:27.742 09:59:50 -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:31:27.742 09:59:50 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:31:27.742 Cannot find device "nvmf_tgt_br" 00:31:27.742 09:59:50 -- nvmf/common.sh@158 -- # true 00:31:27.742 09:59:50 -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:31:27.742 Cannot find device "nvmf_tgt_br2" 00:31:27.742 09:59:50 -- nvmf/common.sh@159 -- # true 00:31:27.742 09:59:50 -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:31:27.742 09:59:50 -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:31:27.742 09:59:51 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:31:27.742 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:31:27.742 09:59:51 -- nvmf/common.sh@162 -- # true 00:31:27.742 09:59:51 -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:31:27.742 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:31:27.742 09:59:51 -- nvmf/common.sh@163 -- # true 00:31:27.742 09:59:51 -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:31:27.742 09:59:51 -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:31:27.742 09:59:51 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:31:27.742 09:59:51 -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:31:27.742 09:59:51 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:31:27.742 09:59:51 -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:31:27.742 09:59:51 -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:31:27.742 09:59:51 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:31:27.742 09:59:51 -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:31:27.742 09:59:51 -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:31:28.001 09:59:51 -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:31:28.001 09:59:51 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:31:28.001 09:59:51 -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:31:28.001 09:59:51 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:31:28.001 09:59:51 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:31:28.001 09:59:51 -- 
nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:31:28.001 09:59:51 -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:31:28.001 09:59:51 -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:31:28.001 09:59:51 -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:31:28.001 09:59:51 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:31:28.001 09:59:51 -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:31:28.001 09:59:51 -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:31:28.001 09:59:51 -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:31:28.001 09:59:51 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:31:28.001 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:31:28.001 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.109 ms 00:31:28.001 00:31:28.001 --- 10.0.0.2 ping statistics --- 00:31:28.001 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:28.001 rtt min/avg/max/mdev = 0.109/0.109/0.109/0.000 ms 00:31:28.001 09:59:51 -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:31:28.001 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:31:28.001 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.072 ms 00:31:28.001 00:31:28.001 --- 10.0.0.3 ping statistics --- 00:31:28.001 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:28.001 rtt min/avg/max/mdev = 0.072/0.072/0.072/0.000 ms 00:31:28.001 09:59:51 -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:31:28.001 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:31:28.001 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.024 ms 00:31:28.001 00:31:28.001 --- 10.0.0.1 ping statistics --- 00:31:28.001 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:28.001 rtt min/avg/max/mdev = 0.024/0.024/0.024/0.000 ms 00:31:28.001 09:59:51 -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:31:28.001 09:59:51 -- nvmf/common.sh@422 -- # return 0 00:31:28.001 09:59:51 -- nvmf/common.sh@439 -- # '[' iso == iso ']' 00:31:28.001 09:59:51 -- nvmf/common.sh@440 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:31:28.569 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:31:28.880 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:31:28.880 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:31:28.880 09:59:52 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:31:28.880 09:59:52 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:31:28.880 09:59:52 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:31:28.880 09:59:52 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:31:28.880 09:59:52 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:31:28.880 09:59:52 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:31:28.880 09:59:52 -- target/abort_qd_sizes.sh@71 -- # nvmfappstart -m 0xf 00:31:28.880 09:59:52 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:31:28.880 09:59:52 -- common/autotest_common.sh@710 -- # xtrace_disable 00:31:28.880 09:59:52 -- common/autotest_common.sh@10 -- # set +x 00:31:28.880 09:59:52 -- nvmf/common.sh@470 -- # nvmfpid=84029 00:31:28.880 09:59:52 -- nvmf/common.sh@469 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xf 00:31:28.880 09:59:52 -- nvmf/common.sh@471 -- # waitforlisten 84029 00:31:28.880 09:59:52 -- 
common/autotest_common.sh@817 -- # '[' -z 84029 ']' 00:31:28.880 09:59:52 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:31:28.880 09:59:52 -- common/autotest_common.sh@822 -- # local max_retries=100 00:31:28.880 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:31:28.880 09:59:52 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:31:28.880 09:59:52 -- common/autotest_common.sh@826 -- # xtrace_disable 00:31:28.880 09:59:52 -- common/autotest_common.sh@10 -- # set +x 00:31:28.880 [2024-04-18 09:59:52.254177] Starting SPDK v24.05-pre git sha1 65b4e17c6 / DPDK 23.11.0 initialization... 00:31:28.880 [2024-04-18 09:59:52.254349] [ DPDK EAL parameters: nvmf -c 0xf --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:31:29.138 [2024-04-18 09:59:52.433181] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:31:29.397 [2024-04-18 09:59:52.719838] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:31:29.397 [2024-04-18 09:59:52.719901] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:31:29.397 [2024-04-18 09:59:52.719938] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:31:29.397 [2024-04-18 09:59:52.719951] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:31:29.397 [2024-04-18 09:59:52.719965] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:31:29.397 [2024-04-18 09:59:52.720215] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:31:29.397 [2024-04-18 09:59:52.720310] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:31:29.397 [2024-04-18 09:59:52.720747] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:31:29.397 [2024-04-18 09:59:52.720753] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:31:29.967 09:59:53 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:31:29.967 09:59:53 -- common/autotest_common.sh@850 -- # return 0 00:31:29.967 09:59:53 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:31:29.967 09:59:53 -- common/autotest_common.sh@716 -- # xtrace_disable 00:31:29.967 09:59:53 -- common/autotest_common.sh@10 -- # set +x 00:31:29.967 09:59:53 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:31:29.967 09:59:53 -- target/abort_qd_sizes.sh@73 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini || :; clean_kernel_target' SIGINT SIGTERM EXIT 00:31:29.967 09:59:53 -- target/abort_qd_sizes.sh@75 -- # mapfile -t nvmes 00:31:29.967 09:59:53 -- target/abort_qd_sizes.sh@75 -- # nvme_in_userspace 00:31:29.967 09:59:53 -- scripts/common.sh@309 -- # local bdf bdfs 00:31:29.967 09:59:53 -- scripts/common.sh@310 -- # local nvmes 00:31:29.967 09:59:53 -- scripts/common.sh@312 -- # [[ -n '' ]] 00:31:29.967 09:59:53 -- scripts/common.sh@315 -- # nvmes=($(iter_pci_class_code 01 08 02)) 00:31:29.967 09:59:53 -- scripts/common.sh@315 -- # iter_pci_class_code 01 08 02 00:31:29.967 09:59:53 -- scripts/common.sh@295 -- # local bdf= 00:31:29.967 09:59:53 -- scripts/common.sh@297 -- # iter_all_pci_class_code 01 08 02 00:31:29.967 09:59:53 -- scripts/common.sh@230 -- # local class 00:31:29.967 09:59:53 -- scripts/common.sh@231 -- # local subclass 00:31:29.967 09:59:53 -- scripts/common.sh@232 -- # local progif 00:31:29.967 09:59:53 -- scripts/common.sh@233 -- # printf %02x 1 00:31:29.967 09:59:53 -- scripts/common.sh@233 -- # class=01 00:31:29.967 09:59:53 -- scripts/common.sh@234 -- # printf %02x 8 00:31:29.967 09:59:53 -- scripts/common.sh@234 -- # subclass=08 00:31:29.967 09:59:53 -- scripts/common.sh@235 -- # printf %02x 2 00:31:29.967 09:59:53 -- scripts/common.sh@235 -- # progif=02 00:31:29.967 09:59:53 -- scripts/common.sh@237 -- # hash lspci 00:31:29.967 09:59:53 -- scripts/common.sh@238 -- # '[' 02 '!=' 00 ']' 00:31:29.967 09:59:53 -- scripts/common.sh@239 -- # lspci -mm -n -D 00:31:29.967 09:59:53 -- scripts/common.sh@240 -- # grep -i -- -p02 00:31:29.967 09:59:53 -- scripts/common.sh@241 -- # awk -v 'cc="0108"' -F ' ' '{if (cc ~ $2) print $1}' 00:31:29.967 09:59:53 -- scripts/common.sh@242 -- # tr -d '"' 00:31:29.967 09:59:53 -- scripts/common.sh@297 -- # for bdf in $(iter_all_pci_class_code "$@") 00:31:29.967 09:59:53 -- scripts/common.sh@298 -- # pci_can_use 0000:00:10.0 00:31:29.967 09:59:53 -- scripts/common.sh@15 -- # local i 00:31:29.967 09:59:53 -- scripts/common.sh@18 -- # [[ =~ 0000:00:10.0 ]] 00:31:29.967 09:59:53 -- scripts/common.sh@22 -- # [[ -z '' ]] 00:31:29.967 09:59:53 -- scripts/common.sh@24 -- # return 0 00:31:29.967 09:59:53 -- scripts/common.sh@299 -- # echo 0000:00:10.0 00:31:29.967 09:59:53 -- scripts/common.sh@297 -- # for bdf in $(iter_all_pci_class_code "$@") 00:31:29.967 09:59:53 -- scripts/common.sh@298 -- # pci_can_use 0000:00:11.0 00:31:29.967 09:59:53 -- scripts/common.sh@15 -- # local i 00:31:29.967 09:59:53 -- scripts/common.sh@18 -- # [[ =~ 
0000:00:11.0 ]] 00:31:29.967 09:59:53 -- scripts/common.sh@22 -- # [[ -z '' ]] 00:31:29.967 09:59:53 -- scripts/common.sh@24 -- # return 0 00:31:29.967 09:59:53 -- scripts/common.sh@299 -- # echo 0000:00:11.0 00:31:29.967 09:59:53 -- scripts/common.sh@318 -- # for bdf in "${nvmes[@]}" 00:31:29.967 09:59:53 -- scripts/common.sh@319 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:10.0 ]] 00:31:29.967 09:59:53 -- scripts/common.sh@320 -- # uname -s 00:31:29.967 09:59:53 -- scripts/common.sh@320 -- # [[ Linux == FreeBSD ]] 00:31:29.967 09:59:53 -- scripts/common.sh@323 -- # bdfs+=("$bdf") 00:31:29.967 09:59:53 -- scripts/common.sh@318 -- # for bdf in "${nvmes[@]}" 00:31:29.967 09:59:53 -- scripts/common.sh@319 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:11.0 ]] 00:31:29.967 09:59:53 -- scripts/common.sh@320 -- # uname -s 00:31:29.967 09:59:53 -- scripts/common.sh@320 -- # [[ Linux == FreeBSD ]] 00:31:29.967 09:59:53 -- scripts/common.sh@323 -- # bdfs+=("$bdf") 00:31:29.967 09:59:53 -- scripts/common.sh@325 -- # (( 2 )) 00:31:29.967 09:59:53 -- scripts/common.sh@326 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 00:31:29.967 09:59:53 -- target/abort_qd_sizes.sh@76 -- # (( 2 > 0 )) 00:31:29.967 09:59:53 -- target/abort_qd_sizes.sh@78 -- # nvme=0000:00:10.0 00:31:29.967 09:59:53 -- target/abort_qd_sizes.sh@80 -- # run_test spdk_target_abort spdk_target 00:31:29.967 09:59:53 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:31:29.967 09:59:53 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:31:29.967 09:59:53 -- common/autotest_common.sh@10 -- # set +x 00:31:29.967 ************************************ 00:31:29.967 START TEST spdk_target_abort 00:31:29.967 ************************************ 00:31:29.967 09:59:53 -- common/autotest_common.sh@1111 -- # spdk_target 00:31:29.967 09:59:53 -- target/abort_qd_sizes.sh@43 -- # local name=spdk_target 00:31:29.967 09:59:53 -- target/abort_qd_sizes.sh@45 -- # rpc_cmd bdev_nvme_attach_controller -t pcie -a 0000:00:10.0 -b spdk_target 00:31:29.967 09:59:53 -- common/autotest_common.sh@549 -- # xtrace_disable 00:31:29.967 09:59:53 -- common/autotest_common.sh@10 -- # set +x 00:31:30.226 spdk_targetn1 00:31:30.226 09:59:53 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:31:30.226 09:59:53 -- target/abort_qd_sizes.sh@47 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:31:30.226 09:59:53 -- common/autotest_common.sh@549 -- # xtrace_disable 00:31:30.226 09:59:53 -- common/autotest_common.sh@10 -- # set +x 00:31:30.226 [2024-04-18 09:59:53.413262] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:31:30.226 09:59:53 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:31:30.226 09:59:53 -- target/abort_qd_sizes.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:testnqn -a -s SPDKISFASTANDAWESOME 00:31:30.226 09:59:53 -- common/autotest_common.sh@549 -- # xtrace_disable 00:31:30.226 09:59:53 -- common/autotest_common.sh@10 -- # set +x 00:31:30.226 09:59:53 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:31:30.226 09:59:53 -- target/abort_qd_sizes.sh@49 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:testnqn spdk_targetn1 00:31:30.226 09:59:53 -- common/autotest_common.sh@549 -- # xtrace_disable 00:31:30.226 09:59:53 -- common/autotest_common.sh@10 -- # set +x 00:31:30.226 09:59:53 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:31:30.226 09:59:53 -- target/abort_qd_sizes.sh@50 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:testnqn -t tcp -a 10.0.0.2 -s 4420 00:31:30.226 09:59:53 -- 
common/autotest_common.sh@549 -- # xtrace_disable 00:31:30.226 09:59:53 -- common/autotest_common.sh@10 -- # set +x 00:31:30.226 [2024-04-18 09:59:53.449534] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:31:30.226 09:59:53 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:31:30.226 09:59:53 -- target/abort_qd_sizes.sh@52 -- # rabort tcp IPv4 10.0.0.2 4420 nqn.2016-06.io.spdk:testnqn 00:31:30.226 09:59:53 -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:31:30.226 09:59:53 -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:31:30.226 09:59:53 -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.2 00:31:30.226 09:59:53 -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:31:30.226 09:59:53 -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:testnqn 00:31:30.226 09:59:53 -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:31:30.226 09:59:53 -- target/abort_qd_sizes.sh@24 -- # local target r 00:31:30.226 09:59:53 -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:31:30.226 09:59:53 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:31:30.226 09:59:53 -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:31:30.226 09:59:53 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:31:30.226 09:59:53 -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:31:30.226 09:59:53 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:31:30.226 09:59:53 -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2' 00:31:30.226 09:59:53 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:31:30.226 09:59:53 -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:31:30.226 09:59:53 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:31:30.226 09:59:53 -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:31:30.226 09:59:53 -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:31:30.226 09:59:53 -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:31:33.510 Initializing NVMe Controllers 00:31:33.510 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:31:33.510 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:31:33.510 Initialization complete. Launching workers. 
00:31:33.510 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 8551, failed: 0 00:31:33.510 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 1041, failed to submit 7510 00:31:33.510 success 743, unsuccess 298, failed 0 00:31:33.510 09:59:56 -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:31:33.510 09:59:56 -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:31:37.699 Initializing NVMe Controllers 00:31:37.699 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:31:37.699 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:31:37.699 Initialization complete. Launching workers. 00:31:37.699 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 8880, failed: 0 00:31:37.699 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 1164, failed to submit 7716 00:31:37.699 success 358, unsuccess 806, failed 0 00:31:37.699 10:00:00 -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:31:37.699 10:00:00 -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:31:40.231 Initializing NVMe Controllers 00:31:40.231 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:31:40.231 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:31:40.231 Initialization complete. Launching workers. 00:31:40.231 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 26696, failed: 0 00:31:40.231 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 2147, failed to submit 24549 00:31:40.231 success 321, unsuccess 1826, failed 0 00:31:40.231 10:00:03 -- target/abort_qd_sizes.sh@54 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:testnqn 00:31:40.231 10:00:03 -- common/autotest_common.sh@549 -- # xtrace_disable 00:31:40.231 10:00:03 -- common/autotest_common.sh@10 -- # set +x 00:31:40.231 10:00:03 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:31:40.231 10:00:03 -- target/abort_qd_sizes.sh@55 -- # rpc_cmd bdev_nvme_detach_controller spdk_target 00:31:40.231 10:00:03 -- common/autotest_common.sh@549 -- # xtrace_disable 00:31:40.231 10:00:03 -- common/autotest_common.sh@10 -- # set +x 00:31:40.798 10:00:03 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:31:40.798 10:00:03 -- target/abort_qd_sizes.sh@61 -- # killprocess 84029 00:31:40.798 10:00:03 -- common/autotest_common.sh@936 -- # '[' -z 84029 ']' 00:31:40.798 10:00:03 -- common/autotest_common.sh@940 -- # kill -0 84029 00:31:40.798 10:00:03 -- common/autotest_common.sh@941 -- # uname 00:31:40.798 10:00:03 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:31:40.798 10:00:03 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 84029 00:31:40.798 killing process with pid 84029 00:31:40.798 10:00:04 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:31:40.798 10:00:04 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:31:40.798 10:00:04 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 84029' 00:31:40.798 10:00:04 -- common/autotest_common.sh@955 -- # kill 84029 
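The three abort runs above come from the rabort() helper traced earlier in this test (target/abort_qd_sizes.sh), which sweeps the queue depth of the abort example against the SPDK target. A condensed, hedged sketch of that sweep; the binary path, arguments, queue depths, and target TRID are taken verbatim from the trace:

# Condensed recap of the qd sweep echoed above (values copied from the trace).
qds=(4 24 64)
trid='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn'
for qd in "${qds[@]}"; do
  # -q: queue depth under test, -w rw -M 50: 50/50 read/write mix, -o 4096: 4 KiB I/Os,
  # -r: transport ID of the target; aborts are issued against the in-flight I/O.
  /home/vagrant/spdk_repo/spdk/build/examples/abort -q "$qd" -w rw -M 50 -o 4096 -r "$trid"
done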
00:31:40.798 10:00:04 -- common/autotest_common.sh@960 -- # wait 84029 00:31:41.736 ************************************ 00:31:41.736 END TEST spdk_target_abort 00:31:41.736 ************************************ 00:31:41.736 00:31:41.736 real 0m11.777s 00:31:41.736 user 0m46.149s 00:31:41.736 sys 0m2.331s 00:31:41.736 10:00:05 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:31:41.736 10:00:05 -- common/autotest_common.sh@10 -- # set +x 00:31:41.993 10:00:05 -- target/abort_qd_sizes.sh@81 -- # run_test kernel_target_abort kernel_target 00:31:41.993 10:00:05 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:31:41.993 10:00:05 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:31:41.993 10:00:05 -- common/autotest_common.sh@10 -- # set +x 00:31:41.993 ************************************ 00:31:41.993 START TEST kernel_target_abort 00:31:41.993 ************************************ 00:31:41.993 10:00:05 -- common/autotest_common.sh@1111 -- # kernel_target 00:31:41.993 10:00:05 -- target/abort_qd_sizes.sh@65 -- # get_main_ns_ip 00:31:41.993 10:00:05 -- nvmf/common.sh@717 -- # local ip 00:31:41.993 10:00:05 -- nvmf/common.sh@718 -- # ip_candidates=() 00:31:41.993 10:00:05 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:31:41.993 10:00:05 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:41.993 10:00:05 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:41.993 10:00:05 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:31:41.993 10:00:05 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:41.993 10:00:05 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:31:41.993 10:00:05 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:31:41.993 10:00:05 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:31:41.993 10:00:05 -- target/abort_qd_sizes.sh@65 -- # configure_kernel_target nqn.2016-06.io.spdk:testnqn 10.0.0.1 00:31:41.994 10:00:05 -- nvmf/common.sh@621 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=10.0.0.1 00:31:41.994 10:00:05 -- nvmf/common.sh@623 -- # nvmet=/sys/kernel/config/nvmet 00:31:41.994 10:00:05 -- nvmf/common.sh@624 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:31:41.994 10:00:05 -- nvmf/common.sh@625 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:31:41.994 10:00:05 -- nvmf/common.sh@626 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:31:41.994 10:00:05 -- nvmf/common.sh@628 -- # local block nvme 00:31:41.994 10:00:05 -- nvmf/common.sh@630 -- # [[ ! 
-e /sys/module/nvmet ]] 00:31:41.994 10:00:05 -- nvmf/common.sh@631 -- # modprobe nvmet 00:31:41.994 10:00:05 -- nvmf/common.sh@634 -- # [[ -e /sys/kernel/config/nvmet ]] 00:31:41.994 10:00:05 -- nvmf/common.sh@636 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:31:42.251 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:31:42.251 Waiting for block devices as requested 00:31:42.251 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:31:42.510 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:31:42.769 10:00:06 -- nvmf/common.sh@639 -- # for block in /sys/block/nvme* 00:31:42.769 10:00:06 -- nvmf/common.sh@640 -- # [[ -e /sys/block/nvme0n1 ]] 00:31:42.769 10:00:06 -- nvmf/common.sh@641 -- # is_block_zoned nvme0n1 00:31:42.769 10:00:06 -- common/autotest_common.sh@1648 -- # local device=nvme0n1 00:31:42.769 10:00:06 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:31:42.769 10:00:06 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:31:42.769 10:00:06 -- nvmf/common.sh@642 -- # block_in_use nvme0n1 00:31:42.769 10:00:06 -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:31:42.769 10:00:06 -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n1 00:31:42.769 No valid GPT data, bailing 00:31:42.769 10:00:06 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:31:42.769 10:00:06 -- scripts/common.sh@391 -- # pt= 00:31:42.769 10:00:06 -- scripts/common.sh@392 -- # return 1 00:31:42.769 10:00:06 -- nvmf/common.sh@642 -- # nvme=/dev/nvme0n1 00:31:42.769 10:00:06 -- nvmf/common.sh@639 -- # for block in /sys/block/nvme* 00:31:42.769 10:00:06 -- nvmf/common.sh@640 -- # [[ -e /sys/block/nvme0n2 ]] 00:31:42.769 10:00:06 -- nvmf/common.sh@641 -- # is_block_zoned nvme0n2 00:31:42.769 10:00:06 -- common/autotest_common.sh@1648 -- # local device=nvme0n2 00:31:42.769 10:00:06 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n2/queue/zoned ]] 00:31:42.769 10:00:06 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:31:42.769 10:00:06 -- nvmf/common.sh@642 -- # block_in_use nvme0n2 00:31:42.769 10:00:06 -- scripts/common.sh@378 -- # local block=nvme0n2 pt 00:31:42.769 10:00:06 -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n2 00:31:43.027 No valid GPT data, bailing 00:31:43.027 10:00:06 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n2 00:31:43.027 10:00:06 -- scripts/common.sh@391 -- # pt= 00:31:43.027 10:00:06 -- scripts/common.sh@392 -- # return 1 00:31:43.027 10:00:06 -- nvmf/common.sh@642 -- # nvme=/dev/nvme0n2 00:31:43.027 10:00:06 -- nvmf/common.sh@639 -- # for block in /sys/block/nvme* 00:31:43.027 10:00:06 -- nvmf/common.sh@640 -- # [[ -e /sys/block/nvme0n3 ]] 00:31:43.027 10:00:06 -- nvmf/common.sh@641 -- # is_block_zoned nvme0n3 00:31:43.027 10:00:06 -- common/autotest_common.sh@1648 -- # local device=nvme0n3 00:31:43.027 10:00:06 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n3/queue/zoned ]] 00:31:43.027 10:00:06 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:31:43.027 10:00:06 -- nvmf/common.sh@642 -- # block_in_use nvme0n3 00:31:43.027 10:00:06 -- scripts/common.sh@378 -- # local block=nvme0n3 pt 00:31:43.027 10:00:06 -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n3 00:31:43.027 No valid GPT data, bailing 00:31:43.027 10:00:06 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value 
/dev/nvme0n3 00:31:43.027 10:00:06 -- scripts/common.sh@391 -- # pt= 00:31:43.027 10:00:06 -- scripts/common.sh@392 -- # return 1 00:31:43.027 10:00:06 -- nvmf/common.sh@642 -- # nvme=/dev/nvme0n3 00:31:43.027 10:00:06 -- nvmf/common.sh@639 -- # for block in /sys/block/nvme* 00:31:43.027 10:00:06 -- nvmf/common.sh@640 -- # [[ -e /sys/block/nvme1n1 ]] 00:31:43.027 10:00:06 -- nvmf/common.sh@641 -- # is_block_zoned nvme1n1 00:31:43.027 10:00:06 -- common/autotest_common.sh@1648 -- # local device=nvme1n1 00:31:43.027 10:00:06 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:31:43.027 10:00:06 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:31:43.027 10:00:06 -- nvmf/common.sh@642 -- # block_in_use nvme1n1 00:31:43.027 10:00:06 -- scripts/common.sh@378 -- # local block=nvme1n1 pt 00:31:43.027 10:00:06 -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme1n1 00:31:43.027 No valid GPT data, bailing 00:31:43.027 10:00:06 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:31:43.027 10:00:06 -- scripts/common.sh@391 -- # pt= 00:31:43.027 10:00:06 -- scripts/common.sh@392 -- # return 1 00:31:43.027 10:00:06 -- nvmf/common.sh@642 -- # nvme=/dev/nvme1n1 00:31:43.027 10:00:06 -- nvmf/common.sh@645 -- # [[ -b /dev/nvme1n1 ]] 00:31:43.027 10:00:06 -- nvmf/common.sh@647 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:31:43.027 10:00:06 -- nvmf/common.sh@648 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:31:43.027 10:00:06 -- nvmf/common.sh@649 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:31:43.027 10:00:06 -- nvmf/common.sh@654 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn 00:31:43.027 10:00:06 -- nvmf/common.sh@656 -- # echo 1 00:31:43.027 10:00:06 -- nvmf/common.sh@657 -- # echo /dev/nvme1n1 00:31:43.027 10:00:06 -- nvmf/common.sh@658 -- # echo 1 00:31:43.027 10:00:06 -- nvmf/common.sh@660 -- # echo 10.0.0.1 00:31:43.027 10:00:06 -- nvmf/common.sh@661 -- # echo tcp 00:31:43.027 10:00:06 -- nvmf/common.sh@662 -- # echo 4420 00:31:43.027 10:00:06 -- nvmf/common.sh@663 -- # echo ipv4 00:31:43.027 10:00:06 -- nvmf/common.sh@666 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/ 00:31:43.027 10:00:06 -- nvmf/common.sh@669 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:e0c1fd61-2902-4c83-8893-eeb512aeb67d --hostid=e0c1fd61-2902-4c83-8893-eeb512aeb67d -a 10.0.0.1 -t tcp -s 4420 00:31:43.027 00:31:43.027 Discovery Log Number of Records 2, Generation counter 2 00:31:43.027 =====Discovery Log Entry 0====== 00:31:43.027 trtype: tcp 00:31:43.027 adrfam: ipv4 00:31:43.027 subtype: current discovery subsystem 00:31:43.027 treq: not specified, sq flow control disable supported 00:31:43.027 portid: 1 00:31:43.027 trsvcid: 4420 00:31:43.027 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:31:43.027 traddr: 10.0.0.1 00:31:43.027 eflags: none 00:31:43.027 sectype: none 00:31:43.027 =====Discovery Log Entry 1====== 00:31:43.027 trtype: tcp 00:31:43.027 adrfam: ipv4 00:31:43.027 subtype: nvme subsystem 00:31:43.027 treq: not specified, sq flow control disable supported 00:31:43.027 portid: 1 00:31:43.027 trsvcid: 4420 00:31:43.027 subnqn: nqn.2016-06.io.spdk:testnqn 00:31:43.028 traddr: 10.0.0.1 00:31:43.028 eflags: none 00:31:43.028 sectype: none 00:31:43.028 10:00:06 -- target/abort_qd_sizes.sh@66 -- # rabort tcp IPv4 10.0.0.1 4420 nqn.2016-06.io.spdk:testnqn 00:31:43.028 
10:00:06 -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:31:43.028 10:00:06 -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:31:43.028 10:00:06 -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.1 00:31:43.028 10:00:06 -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:31:43.028 10:00:06 -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:testnqn 00:31:43.028 10:00:06 -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:31:43.028 10:00:06 -- target/abort_qd_sizes.sh@24 -- # local target r 00:31:43.028 10:00:06 -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:31:43.028 10:00:06 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:31:43.028 10:00:06 -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:31:43.028 10:00:06 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:31:43.028 10:00:06 -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:31:43.028 10:00:06 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:31:43.028 10:00:06 -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1' 00:31:43.028 10:00:06 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:31:43.028 10:00:06 -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420' 00:31:43.028 10:00:06 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:31:43.028 10:00:06 -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:31:43.028 10:00:06 -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:31:43.028 10:00:06 -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:31:46.334 Initializing NVMe Controllers 00:31:46.334 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:31:46.334 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:31:46.334 Initialization complete. Launching workers. 00:31:46.334 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 24420, failed: 0 00:31:46.334 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 24420, failed to submit 0 00:31:46.334 success 0, unsuccess 24420, failed 0 00:31:46.334 10:00:09 -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:31:46.334 10:00:09 -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:31:49.619 Initializing NVMe Controllers 00:31:49.619 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:31:49.619 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:31:49.619 Initialization complete. Launching workers. 
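The kernel target these runs talk to was configured just above by configure_kernel_target in nvmf/common.sh. A hedged sketch of that flow follows; xtrace does not show redirection targets, so the configfs attribute names below are the standard kernel nvmet ones and are an assumption, while the echoed values (device, address, port, family) are taken from the trace:

# Hedged recap of the kernel nvmet setup traced above.
subsys=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn
port=/sys/kernel/config/nvmet/ports/1
modprobe nvmet
mkdir "$subsys" "$subsys/namespaces/1" "$port"
echo 1            > "$subsys/attr_allow_any_host"        # attribute names assumed, not shown in xtrace
echo /dev/nvme1n1 > "$subsys/namespaces/1/device_path"
echo 1            > "$subsys/namespaces/1/enable"
echo 10.0.0.1     > "$port/addr_traddr"
echo tcp          > "$port/addr_trtype"
echo 4420         > "$port/addr_trsvcid"
echo ipv4         > "$port/addr_adrfam"
ln -s "$subsys" "$port/subsystems/"                       # expose the subsystem on the port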
00:31:49.619 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 57022, failed: 0 00:31:49.619 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 24528, failed to submit 32494 00:31:49.619 success 0, unsuccess 24528, failed 0 00:31:49.619 10:00:12 -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:31:49.619 10:00:12 -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:31:52.905 Initializing NVMe Controllers 00:31:52.905 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:31:52.905 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:31:52.905 Initialization complete. Launching workers. 00:31:52.905 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 67560, failed: 0 00:31:52.905 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 16912, failed to submit 50648 00:31:52.905 success 0, unsuccess 16912, failed 0 00:31:52.905 10:00:16 -- target/abort_qd_sizes.sh@67 -- # clean_kernel_target 00:31:52.905 10:00:16 -- nvmf/common.sh@673 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]] 00:31:52.905 10:00:16 -- nvmf/common.sh@675 -- # echo 0 00:31:52.905 10:00:16 -- nvmf/common.sh@677 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn 00:31:52.905 10:00:16 -- nvmf/common.sh@678 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:31:52.905 10:00:16 -- nvmf/common.sh@679 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:31:52.905 10:00:16 -- nvmf/common.sh@680 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:31:52.905 10:00:16 -- nvmf/common.sh@682 -- # modules=(/sys/module/nvmet/holders/*) 00:31:52.905 10:00:16 -- nvmf/common.sh@684 -- # modprobe -r nvmet_tcp nvmet 00:31:52.905 10:00:16 -- nvmf/common.sh@687 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:31:53.840 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:31:55.211 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:31:55.211 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:31:55.211 00:31:55.211 real 0m13.313s 00:31:55.211 user 0m7.021s 00:31:55.211 sys 0m4.049s 00:31:55.211 10:00:18 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:31:55.211 10:00:18 -- common/autotest_common.sh@10 -- # set +x 00:31:55.211 ************************************ 00:31:55.211 END TEST kernel_target_abort 00:31:55.211 ************************************ 00:31:55.211 10:00:18 -- target/abort_qd_sizes.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:31:55.211 10:00:18 -- target/abort_qd_sizes.sh@84 -- # nvmftestfini 00:31:55.211 10:00:18 -- nvmf/common.sh@477 -- # nvmfcleanup 00:31:55.211 10:00:18 -- nvmf/common.sh@117 -- # sync 00:31:55.211 10:00:18 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:31:55.211 10:00:18 -- nvmf/common.sh@120 -- # set +e 00:31:55.211 10:00:18 -- nvmf/common.sh@121 -- # for i in {1..20} 00:31:55.211 10:00:18 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:31:55.470 rmmod nvme_tcp 00:31:55.470 rmmod nvme_fabrics 00:31:55.470 rmmod nvme_keyring 00:31:55.470 10:00:18 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:31:55.470 10:00:18 -- nvmf/common.sh@124 -- # set -e 00:31:55.470 
10:00:18 -- nvmf/common.sh@125 -- # return 0 00:31:55.470 10:00:18 -- nvmf/common.sh@478 -- # '[' -n 84029 ']' 00:31:55.470 10:00:18 -- nvmf/common.sh@479 -- # killprocess 84029 00:31:55.470 10:00:18 -- common/autotest_common.sh@936 -- # '[' -z 84029 ']' 00:31:55.470 10:00:18 -- common/autotest_common.sh@940 -- # kill -0 84029 00:31:55.470 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 940: kill: (84029) - No such process 00:31:55.470 Process with pid 84029 is not found 00:31:55.470 10:00:18 -- common/autotest_common.sh@963 -- # echo 'Process with pid 84029 is not found' 00:31:55.470 10:00:18 -- nvmf/common.sh@481 -- # '[' iso == iso ']' 00:31:55.470 10:00:18 -- nvmf/common.sh@482 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:31:55.728 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:31:55.728 Waiting for block devices as requested 00:31:55.728 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:31:55.986 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:31:55.986 10:00:19 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:31:55.986 10:00:19 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:31:55.986 10:00:19 -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:31:55.986 10:00:19 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:31:55.986 10:00:19 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:55.986 10:00:19 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:31:55.986 10:00:19 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:55.986 10:00:19 -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:31:55.986 00:31:55.986 real 0m28.503s 00:31:55.986 user 0m54.417s 00:31:55.986 sys 0m7.780s 00:31:55.986 10:00:19 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:31:55.986 10:00:19 -- common/autotest_common.sh@10 -- # set +x 00:31:55.986 ************************************ 00:31:55.986 END TEST nvmf_abort_qd_sizes 00:31:55.986 ************************************ 00:31:55.986 10:00:19 -- spdk/autotest.sh@293 -- # run_test keyring_file /home/vagrant/spdk_repo/spdk/test/keyring/file.sh 00:31:55.986 10:00:19 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:31:55.986 10:00:19 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:31:55.986 10:00:19 -- common/autotest_common.sh@10 -- # set +x 00:31:56.244 ************************************ 00:31:56.244 START TEST keyring_file 00:31:56.244 ************************************ 00:31:56.244 10:00:19 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/keyring/file.sh 00:31:56.244 * Looking for test storage... 
00:31:56.244 * Found test storage at /home/vagrant/spdk_repo/spdk/test/keyring 00:31:56.244 10:00:19 -- keyring/file.sh@11 -- # source /home/vagrant/spdk_repo/spdk/test/keyring/common.sh 00:31:56.244 10:00:19 -- keyring/common.sh@4 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:31:56.244 10:00:19 -- nvmf/common.sh@7 -- # uname -s 00:31:56.244 10:00:19 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:31:56.244 10:00:19 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:31:56.244 10:00:19 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:31:56.244 10:00:19 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:31:56.244 10:00:19 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:31:56.244 10:00:19 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:31:56.244 10:00:19 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:31:56.244 10:00:19 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:31:56.244 10:00:19 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:31:56.244 10:00:19 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:31:56.244 10:00:19 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:e0c1fd61-2902-4c83-8893-eeb512aeb67d 00:31:56.244 10:00:19 -- nvmf/common.sh@18 -- # NVME_HOSTID=e0c1fd61-2902-4c83-8893-eeb512aeb67d 00:31:56.244 10:00:19 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:31:56.244 10:00:19 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:31:56.244 10:00:19 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:31:56.244 10:00:19 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:31:56.244 10:00:19 -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:31:56.244 10:00:19 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:31:56.244 10:00:19 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:31:56.244 10:00:19 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:31:56.244 10:00:19 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:56.244 10:00:19 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:56.244 10:00:19 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:56.244 10:00:19 -- paths/export.sh@5 -- # export PATH 00:31:56.244 10:00:19 -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:56.244 10:00:19 -- nvmf/common.sh@47 -- # : 0 00:31:56.244 10:00:19 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:31:56.244 10:00:19 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:31:56.244 10:00:19 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:31:56.244 10:00:19 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:31:56.244 10:00:19 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:31:56.244 10:00:19 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:31:56.244 10:00:19 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:31:56.244 10:00:19 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:31:56.244 10:00:19 -- keyring/common.sh@6 -- # bperfsock=/var/tmp/bperf.sock 00:31:56.244 10:00:19 -- keyring/file.sh@13 -- # subnqn=nqn.2016-06.io.spdk:cnode0 00:31:56.244 10:00:19 -- keyring/file.sh@14 -- # hostnqn=nqn.2016-06.io.spdk:host0 00:31:56.244 10:00:19 -- keyring/file.sh@15 -- # key0=00112233445566778899aabbccddeeff 00:31:56.244 10:00:19 -- keyring/file.sh@16 -- # key1=112233445566778899aabbccddeeff00 00:31:56.244 10:00:19 -- keyring/file.sh@24 -- # trap cleanup EXIT 00:31:56.244 10:00:19 -- keyring/file.sh@26 -- # prep_key key0 00112233445566778899aabbccddeeff 0 00:31:56.244 10:00:19 -- keyring/common.sh@15 -- # local name key digest path 00:31:56.244 10:00:19 -- keyring/common.sh@17 -- # name=key0 00:31:56.244 10:00:19 -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:31:56.244 10:00:19 -- keyring/common.sh@17 -- # digest=0 00:31:56.244 10:00:19 -- keyring/common.sh@18 -- # mktemp 00:31:56.244 10:00:19 -- keyring/common.sh@18 -- # path=/tmp/tmp.YsPlszKRWb 00:31:56.244 10:00:19 -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:31:56.244 10:00:19 -- nvmf/common.sh@704 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:31:56.244 10:00:19 -- nvmf/common.sh@691 -- # local prefix key digest 00:31:56.244 10:00:19 -- nvmf/common.sh@693 -- # prefix=NVMeTLSkey-1 00:31:56.244 10:00:19 -- nvmf/common.sh@693 -- # key=00112233445566778899aabbccddeeff 00:31:56.244 10:00:19 -- nvmf/common.sh@693 -- # digest=0 00:31:56.244 10:00:19 -- nvmf/common.sh@694 -- # python - 00:31:56.244 10:00:19 -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.YsPlszKRWb 00:31:56.244 10:00:19 -- keyring/common.sh@23 -- # echo /tmp/tmp.YsPlszKRWb 00:31:56.244 10:00:19 -- keyring/file.sh@26 -- # key0path=/tmp/tmp.YsPlszKRWb 00:31:56.244 10:00:19 -- keyring/file.sh@27 -- # prep_key key1 112233445566778899aabbccddeeff00 0 00:31:56.244 10:00:19 -- keyring/common.sh@15 -- # local name key digest path 00:31:56.244 10:00:19 -- keyring/common.sh@17 -- # name=key1 00:31:56.244 10:00:19 -- keyring/common.sh@17 -- # key=112233445566778899aabbccddeeff00 00:31:56.244 10:00:19 -- keyring/common.sh@17 -- # digest=0 00:31:56.244 10:00:19 -- keyring/common.sh@18 -- # mktemp 00:31:56.244 10:00:19 -- keyring/common.sh@18 -- # path=/tmp/tmp.tdFCfxSB47 00:31:56.244 10:00:19 -- keyring/common.sh@20 -- # format_interchange_psk 112233445566778899aabbccddeeff00 0 00:31:56.244 10:00:19 -- nvmf/common.sh@704 -- # format_key NVMeTLSkey-1 
112233445566778899aabbccddeeff00 0 00:31:56.244 10:00:19 -- nvmf/common.sh@691 -- # local prefix key digest 00:31:56.244 10:00:19 -- nvmf/common.sh@693 -- # prefix=NVMeTLSkey-1 00:31:56.244 10:00:19 -- nvmf/common.sh@693 -- # key=112233445566778899aabbccddeeff00 00:31:56.244 10:00:19 -- nvmf/common.sh@693 -- # digest=0 00:31:56.244 10:00:19 -- nvmf/common.sh@694 -- # python - 00:31:56.244 10:00:19 -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.tdFCfxSB47 00:31:56.244 10:00:19 -- keyring/common.sh@23 -- # echo /tmp/tmp.tdFCfxSB47 00:31:56.244 10:00:19 -- keyring/file.sh@27 -- # key1path=/tmp/tmp.tdFCfxSB47 00:31:56.244 10:00:19 -- keyring/file.sh@30 -- # tgtpid=85028 00:31:56.244 10:00:19 -- keyring/file.sh@29 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:31:56.244 10:00:19 -- keyring/file.sh@32 -- # waitforlisten 85028 00:31:56.244 10:00:19 -- common/autotest_common.sh@817 -- # '[' -z 85028 ']' 00:31:56.244 10:00:19 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:31:56.244 10:00:19 -- common/autotest_common.sh@822 -- # local max_retries=100 00:31:56.244 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:31:56.244 10:00:19 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:31:56.244 10:00:19 -- common/autotest_common.sh@826 -- # xtrace_disable 00:31:56.245 10:00:19 -- common/autotest_common.sh@10 -- # set +x 00:31:56.501 [2024-04-18 10:00:19.748324] Starting SPDK v24.05-pre git sha1 65b4e17c6 / DPDK 23.11.0 initialization... 00:31:56.501 [2024-04-18 10:00:19.748514] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85028 ] 00:31:56.758 [2024-04-18 10:00:19.916426] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:57.016 [2024-04-18 10:00:20.184810] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:31:57.950 10:00:21 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:31:57.950 10:00:21 -- common/autotest_common.sh@850 -- # return 0 00:31:57.950 10:00:21 -- keyring/file.sh@33 -- # rpc_cmd 00:31:57.950 10:00:21 -- common/autotest_common.sh@549 -- # xtrace_disable 00:31:57.950 10:00:21 -- common/autotest_common.sh@10 -- # set +x 00:31:57.950 [2024-04-18 10:00:21.086193] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:31:57.950 null0 00:31:57.950 [2024-04-18 10:00:21.118600] tcp.c: 925:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:31:57.950 [2024-04-18 10:00:21.119067] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:31:57.950 [2024-04-18 10:00:21.126636] tcp.c:3652:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:31:57.950 10:00:21 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:31:57.950 10:00:21 -- keyring/file.sh@43 -- # NOT rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:31:57.950 10:00:21 -- common/autotest_common.sh@638 -- # local es=0 00:31:57.950 10:00:21 -- common/autotest_common.sh@640 -- # valid_exec_arg rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:31:57.950 10:00:21 -- common/autotest_common.sh@626 -- # local arg=rpc_cmd 00:31:57.950 10:00:21 -- 
common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:31:57.950 10:00:21 -- common/autotest_common.sh@630 -- # type -t rpc_cmd 00:31:57.950 10:00:21 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:31:57.950 10:00:21 -- common/autotest_common.sh@641 -- # rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:31:57.950 10:00:21 -- common/autotest_common.sh@549 -- # xtrace_disable 00:31:57.950 10:00:21 -- common/autotest_common.sh@10 -- # set +x 00:31:57.950 [2024-04-18 10:00:21.138554] nvmf_rpc.c: 769:nvmf_rpc_listen_paused: *ERROR*: A listener already exists with different secure channel option.request: 00:31:57.950 { 00:31:57.950 "nqn": "nqn.2016-06.io.spdk:cnode0", 00:31:57.950 "secure_channel": false, 00:31:57.950 "listen_address": { 00:31:57.950 "trtype": "tcp", 00:31:57.950 "traddr": "127.0.0.1", 00:31:57.950 "trsvcid": "4420" 00:31:57.950 }, 00:31:57.950 "method": "nvmf_subsystem_add_listener", 00:31:57.950 "req_id": 1 00:31:57.950 } 00:31:57.950 Got JSON-RPC error response 00:31:57.950 response: 00:31:57.950 { 00:31:57.950 "code": -32602, 00:31:57.950 "message": "Invalid parameters" 00:31:57.950 } 00:31:57.950 10:00:21 -- common/autotest_common.sh@577 -- # [[ 1 == 0 ]] 00:31:57.950 10:00:21 -- common/autotest_common.sh@641 -- # es=1 00:31:57.950 10:00:21 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:31:57.950 10:00:21 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:31:57.951 10:00:21 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:31:57.951 10:00:21 -- keyring/file.sh@46 -- # bperfpid=85049 00:31:57.951 10:00:21 -- keyring/file.sh@48 -- # waitforlisten 85049 /var/tmp/bperf.sock 00:31:57.951 10:00:21 -- keyring/file.sh@45 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z 00:31:57.951 10:00:21 -- common/autotest_common.sh@817 -- # '[' -z 85049 ']' 00:31:57.951 10:00:21 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bperf.sock 00:31:57.951 10:00:21 -- common/autotest_common.sh@822 -- # local max_retries=100 00:31:57.951 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:31:57.951 10:00:21 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:31:57.951 10:00:21 -- common/autotest_common.sh@826 -- # xtrace_disable 00:31:57.951 10:00:21 -- common/autotest_common.sh@10 -- # set +x 00:31:57.951 [2024-04-18 10:00:21.233111] Starting SPDK v24.05-pre git sha1 65b4e17c6 / DPDK 23.11.0 initialization... 
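The key files referenced in this test (/tmp/tmp.YsPlszKRWb and /tmp/tmp.tdFCfxSB47) were produced earlier by prep_key in test/keyring/common.sh. A hedged recap of that flow; the redirection of the helper's output into the mktemp path is an assumption, since xtrace hides redirections, but the key material, digest, and 0600 mode are taken from the trace:

# Hedged recap of prep_key as traced earlier in keyring/file.sh.
source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh    # provides format_interchange_psk
key0path=$(mktemp)                                         # e.g. /tmp/tmp.YsPlszKRWb
format_interchange_psk 00112233445566778899aabbccddeeff 0 > "$key0path"
chmod 0600 "$key0path"                                     # the keyring rejects looser permissions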
00:31:57.951 [2024-04-18 10:00:21.233278] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85049 ] 00:31:58.209 [2024-04-18 10:00:21.394255] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:58.467 [2024-04-18 10:00:21.629731] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:31:59.033 10:00:22 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:31:59.033 10:00:22 -- common/autotest_common.sh@850 -- # return 0 00:31:59.033 10:00:22 -- keyring/file.sh@49 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.YsPlszKRWb 00:31:59.033 10:00:22 -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.YsPlszKRWb 00:31:59.033 10:00:22 -- keyring/file.sh@50 -- # bperf_cmd keyring_file_add_key key1 /tmp/tmp.tdFCfxSB47 00:31:59.033 10:00:22 -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/tmp.tdFCfxSB47 00:31:59.299 10:00:22 -- keyring/file.sh@51 -- # get_key key0 00:31:59.300 10:00:22 -- keyring/file.sh@51 -- # jq -r .path 00:31:59.300 10:00:22 -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:31:59.300 10:00:22 -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:31:59.300 10:00:22 -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:31:59.559 10:00:22 -- keyring/file.sh@51 -- # [[ /tmp/tmp.YsPlszKRWb == \/\t\m\p\/\t\m\p\.\Y\s\P\l\s\z\K\R\W\b ]] 00:31:59.559 10:00:22 -- keyring/file.sh@52 -- # get_key key1 00:31:59.559 10:00:22 -- keyring/file.sh@52 -- # jq -r .path 00:31:59.560 10:00:22 -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:31:59.560 10:00:22 -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:31:59.560 10:00:22 -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:31:59.820 10:00:23 -- keyring/file.sh@52 -- # [[ /tmp/tmp.tdFCfxSB47 == \/\t\m\p\/\t\m\p\.\t\d\F\C\f\x\S\B\4\7 ]] 00:31:59.820 10:00:23 -- keyring/file.sh@53 -- # get_refcnt key0 00:31:59.820 10:00:23 -- keyring/common.sh@12 -- # jq -r .refcnt 00:31:59.820 10:00:23 -- keyring/common.sh@12 -- # get_key key0 00:31:59.820 10:00:23 -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:31:59.820 10:00:23 -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:31:59.820 10:00:23 -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:32:00.080 10:00:23 -- keyring/file.sh@53 -- # (( 1 == 1 )) 00:32:00.080 10:00:23 -- keyring/file.sh@54 -- # get_refcnt key1 00:32:00.080 10:00:23 -- keyring/common.sh@12 -- # get_key key1 00:32:00.080 10:00:23 -- keyring/common.sh@12 -- # jq -r .refcnt 00:32:00.080 10:00:23 -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:32:00.080 10:00:23 -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:32:00.080 10:00:23 -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:32:00.339 10:00:23 -- keyring/file.sh@54 -- # (( 1 == 1 )) 00:32:00.339 10:00:23 -- keyring/file.sh@57 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk 
key0 00:32:00.339 10:00:23 -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:32:00.615 [2024-04-18 10:00:23.792665] bdev_nvme_rpc.c: 515:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:32:00.615 nvme0n1 00:32:00.615 10:00:23 -- keyring/file.sh@59 -- # get_refcnt key0 00:32:00.615 10:00:23 -- keyring/common.sh@12 -- # get_key key0 00:32:00.615 10:00:23 -- keyring/common.sh@12 -- # jq -r .refcnt 00:32:00.615 10:00:23 -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:32:00.615 10:00:23 -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:32:00.615 10:00:23 -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:32:00.873 10:00:24 -- keyring/file.sh@59 -- # (( 2 == 2 )) 00:32:00.873 10:00:24 -- keyring/file.sh@60 -- # get_refcnt key1 00:32:00.873 10:00:24 -- keyring/common.sh@12 -- # jq -r .refcnt 00:32:00.873 10:00:24 -- keyring/common.sh@12 -- # get_key key1 00:32:00.873 10:00:24 -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:32:00.873 10:00:24 -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:32:00.873 10:00:24 -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:32:01.137 10:00:24 -- keyring/file.sh@60 -- # (( 1 == 1 )) 00:32:01.137 10:00:24 -- keyring/file.sh@62 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:32:01.137 Running I/O for 1 seconds... 00:32:02.515 00:32:02.515 Latency(us) 00:32:02.515 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:32:02.515 Job: nvme0n1 (Core Mask 0x2, workload: randrw, percentage: 50, depth: 128, IO size: 4096) 00:32:02.515 nvme0n1 : 1.01 7783.01 30.40 0.00 0.00 16356.60 9532.51 28359.21 00:32:02.515 =================================================================================================================== 00:32:02.515 Total : 7783.01 30.40 0.00 0.00 16356.60 9532.51 28359.21 00:32:02.515 0 00:32:02.516 10:00:25 -- keyring/file.sh@64 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:32:02.516 10:00:25 -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:32:02.516 10:00:25 -- keyring/file.sh@65 -- # get_refcnt key0 00:32:02.516 10:00:25 -- keyring/common.sh@12 -- # get_key key0 00:32:02.516 10:00:25 -- keyring/common.sh@12 -- # jq -r .refcnt 00:32:02.516 10:00:25 -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:32:02.516 10:00:25 -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:32:02.516 10:00:25 -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:32:02.774 10:00:26 -- keyring/file.sh@65 -- # (( 1 == 1 )) 00:32:02.774 10:00:26 -- keyring/file.sh@66 -- # get_refcnt key1 00:32:02.774 10:00:26 -- keyring/common.sh@12 -- # get_key key1 00:32:02.774 10:00:26 -- keyring/common.sh@12 -- # jq -r .refcnt 00:32:02.774 10:00:26 -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:32:02.774 10:00:26 -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:32:02.774 10:00:26 -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:32:03.035 
10:00:26 -- keyring/file.sh@66 -- # (( 1 == 1 )) 00:32:03.035 10:00:26 -- keyring/file.sh@69 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:32:03.035 10:00:26 -- common/autotest_common.sh@638 -- # local es=0 00:32:03.035 10:00:26 -- common/autotest_common.sh@640 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:32:03.035 10:00:26 -- common/autotest_common.sh@626 -- # local arg=bperf_cmd 00:32:03.036 10:00:26 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:32:03.036 10:00:26 -- common/autotest_common.sh@630 -- # type -t bperf_cmd 00:32:03.036 10:00:26 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:32:03.036 10:00:26 -- common/autotest_common.sh@641 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:32:03.036 10:00:26 -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:32:03.295 [2024-04-18 10:00:26.498636] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:32:03.295 [2024-04-18 10:00:26.499003] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x614000009840 (107): Transport endpoint is not connected 00:32:03.295 [2024-04-18 10:00:26.499972] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x614000009840 (9): Bad file descriptor 00:32:03.296 [2024-04-18 10:00:26.500966] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:32:03.296 [2024-04-18 10:00:26.501000] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 127.0.0.1 00:32:03.296 [2024-04-18 10:00:26.501017] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 
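The repeated refcnt checks around these attach attempts come from the get_key/get_refcnt helpers in test/keyring/common.sh. A minimal equivalent, reconstructed from the keyring_get_keys and jq calls echoed in the trace (helper internals are not shown verbatim, so this is a stand-in):

# Minimal stand-in for the refcount check used throughout (socket path from the trace).
rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
get_refcnt() {
  "$rpc" -s /var/tmp/bperf.sock keyring_get_keys |
    jq -r ".[] | select(.name == \"$1\") | .refcnt"
}
(( $(get_refcnt key0) == 1 ))   # 1 == key loaded but not referenced by any controller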
00:32:03.296 request: 00:32:03.296 { 00:32:03.296 "name": "nvme0", 00:32:03.296 "trtype": "tcp", 00:32:03.296 "traddr": "127.0.0.1", 00:32:03.296 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:32:03.296 "adrfam": "ipv4", 00:32:03.296 "trsvcid": "4420", 00:32:03.296 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:32:03.296 "psk": "key1", 00:32:03.296 "method": "bdev_nvme_attach_controller", 00:32:03.296 "req_id": 1 00:32:03.296 } 00:32:03.296 Got JSON-RPC error response 00:32:03.296 response: 00:32:03.296 { 00:32:03.296 "code": -32602, 00:32:03.296 "message": "Invalid parameters" 00:32:03.296 } 00:32:03.296 10:00:26 -- common/autotest_common.sh@641 -- # es=1 00:32:03.296 10:00:26 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:32:03.296 10:00:26 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:32:03.296 10:00:26 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:32:03.296 10:00:26 -- keyring/file.sh@71 -- # get_refcnt key0 00:32:03.296 10:00:26 -- keyring/common.sh@12 -- # get_key key0 00:32:03.296 10:00:26 -- keyring/common.sh@12 -- # jq -r .refcnt 00:32:03.296 10:00:26 -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:32:03.296 10:00:26 -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:32:03.296 10:00:26 -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:32:03.554 10:00:26 -- keyring/file.sh@71 -- # (( 1 == 1 )) 00:32:03.554 10:00:26 -- keyring/file.sh@72 -- # get_refcnt key1 00:32:03.554 10:00:26 -- keyring/common.sh@12 -- # get_key key1 00:32:03.554 10:00:26 -- keyring/common.sh@12 -- # jq -r .refcnt 00:32:03.554 10:00:26 -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:32:03.554 10:00:26 -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:32:03.554 10:00:26 -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:32:03.811 10:00:27 -- keyring/file.sh@72 -- # (( 1 == 1 )) 00:32:03.811 10:00:27 -- keyring/file.sh@75 -- # bperf_cmd keyring_file_remove_key key0 00:32:03.811 10:00:27 -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:32:04.069 10:00:27 -- keyring/file.sh@76 -- # bperf_cmd keyring_file_remove_key key1 00:32:04.069 10:00:27 -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key1 00:32:04.328 10:00:27 -- keyring/file.sh@77 -- # bperf_cmd keyring_get_keys 00:32:04.328 10:00:27 -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:32:04.328 10:00:27 -- keyring/file.sh@77 -- # jq length 00:32:04.586 10:00:27 -- keyring/file.sh@77 -- # (( 0 == 0 )) 00:32:04.586 10:00:27 -- keyring/file.sh@80 -- # chmod 0660 /tmp/tmp.YsPlszKRWb 00:32:04.586 10:00:27 -- keyring/file.sh@81 -- # NOT bperf_cmd keyring_file_add_key key0 /tmp/tmp.YsPlszKRWb 00:32:04.586 10:00:27 -- common/autotest_common.sh@638 -- # local es=0 00:32:04.586 10:00:27 -- common/autotest_common.sh@640 -- # valid_exec_arg bperf_cmd keyring_file_add_key key0 /tmp/tmp.YsPlszKRWb 00:32:04.586 10:00:27 -- common/autotest_common.sh@626 -- # local arg=bperf_cmd 00:32:04.586 10:00:27 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:32:04.586 10:00:27 -- common/autotest_common.sh@630 -- # type -t bperf_cmd 00:32:04.586 10:00:27 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:32:04.586 10:00:27 -- 
common/autotest_common.sh@641 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.YsPlszKRWb 00:32:04.586 10:00:27 -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.YsPlszKRWb 00:32:04.844 [2024-04-18 10:00:28.098998] keyring.c: 34:keyring_file_check_path: *ERROR*: Invalid permissions for key file '/tmp/tmp.YsPlszKRWb': 0100660 00:32:04.844 [2024-04-18 10:00:28.099378] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:32:04.844 request: 00:32:04.844 { 00:32:04.844 "name": "key0", 00:32:04.844 "path": "/tmp/tmp.YsPlszKRWb", 00:32:04.844 "method": "keyring_file_add_key", 00:32:04.844 "req_id": 1 00:32:04.844 } 00:32:04.844 Got JSON-RPC error response 00:32:04.844 response: 00:32:04.844 { 00:32:04.844 "code": -1, 00:32:04.844 "message": "Operation not permitted" 00:32:04.844 } 00:32:04.844 10:00:28 -- common/autotest_common.sh@641 -- # es=1 00:32:04.844 10:00:28 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:32:04.844 10:00:28 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:32:04.844 10:00:28 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:32:04.844 10:00:28 -- keyring/file.sh@84 -- # chmod 0600 /tmp/tmp.YsPlszKRWb 00:32:04.844 10:00:28 -- keyring/file.sh@85 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.YsPlszKRWb 00:32:04.844 10:00:28 -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.YsPlszKRWb 00:32:05.104 10:00:28 -- keyring/file.sh@86 -- # rm -f /tmp/tmp.YsPlszKRWb 00:32:05.104 10:00:28 -- keyring/file.sh@88 -- # get_refcnt key0 00:32:05.104 10:00:28 -- keyring/common.sh@12 -- # get_key key0 00:32:05.104 10:00:28 -- keyring/common.sh@12 -- # jq -r .refcnt 00:32:05.104 10:00:28 -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:32:05.104 10:00:28 -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:32:05.104 10:00:28 -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:32:05.363 10:00:28 -- keyring/file.sh@88 -- # (( 1 == 1 )) 00:32:05.363 10:00:28 -- keyring/file.sh@90 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:32:05.363 10:00:28 -- common/autotest_common.sh@638 -- # local es=0 00:32:05.363 10:00:28 -- common/autotest_common.sh@640 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:32:05.363 10:00:28 -- common/autotest_common.sh@626 -- # local arg=bperf_cmd 00:32:05.363 10:00:28 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:32:05.363 10:00:28 -- common/autotest_common.sh@630 -- # type -t bperf_cmd 00:32:05.363 10:00:28 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:32:05.363 10:00:28 -- common/autotest_common.sh@641 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:32:05.363 10:00:28 -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:32:05.621 [2024-04-18 10:00:28.851228] keyring.c: 
29:keyring_file_check_path: *ERROR*: Could not stat key file '/tmp/tmp.YsPlszKRWb': No such file or directory 00:32:05.622 [2024-04-18 10:00:28.851308] nvme_tcp.c:2570:nvme_tcp_generate_tls_credentials: *ERROR*: Failed to obtain key 'key0': No such file or directory 00:32:05.622 [2024-04-18 10:00:28.851342] nvme.c: 683:nvme_ctrlr_probe: *ERROR*: Failed to construct NVMe controller for SSD: 127.0.0.1 00:32:05.622 [2024-04-18 10:00:28.851356] nvme.c: 821:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:32:05.622 [2024-04-18 10:00:28.851371] bdev_nvme.c:6191:bdev_nvme_create: *ERROR*: No controller was found with provided trid (traddr: 127.0.0.1) 00:32:05.622 request: 00:32:05.622 { 00:32:05.622 "name": "nvme0", 00:32:05.622 "trtype": "tcp", 00:32:05.622 "traddr": "127.0.0.1", 00:32:05.622 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:32:05.622 "adrfam": "ipv4", 00:32:05.622 "trsvcid": "4420", 00:32:05.622 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:32:05.622 "psk": "key0", 00:32:05.622 "method": "bdev_nvme_attach_controller", 00:32:05.622 "req_id": 1 00:32:05.622 } 00:32:05.622 Got JSON-RPC error response 00:32:05.622 response: 00:32:05.622 { 00:32:05.622 "code": -19, 00:32:05.622 "message": "No such device" 00:32:05.622 } 00:32:05.622 10:00:28 -- common/autotest_common.sh@641 -- # es=1 00:32:05.622 10:00:28 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:32:05.622 10:00:28 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:32:05.622 10:00:28 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:32:05.622 10:00:28 -- keyring/file.sh@92 -- # bperf_cmd keyring_file_remove_key key0 00:32:05.622 10:00:28 -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:32:05.880 10:00:29 -- keyring/file.sh@95 -- # prep_key key0 00112233445566778899aabbccddeeff 0 00:32:05.880 10:00:29 -- keyring/common.sh@15 -- # local name key digest path 00:32:05.880 10:00:29 -- keyring/common.sh@17 -- # name=key0 00:32:05.880 10:00:29 -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:32:05.880 10:00:29 -- keyring/common.sh@17 -- # digest=0 00:32:05.880 10:00:29 -- keyring/common.sh@18 -- # mktemp 00:32:05.880 10:00:29 -- keyring/common.sh@18 -- # path=/tmp/tmp.pur33FR1gP 00:32:05.880 10:00:29 -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:32:05.880 10:00:29 -- nvmf/common.sh@704 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:32:05.880 10:00:29 -- nvmf/common.sh@691 -- # local prefix key digest 00:32:05.880 10:00:29 -- nvmf/common.sh@693 -- # prefix=NVMeTLSkey-1 00:32:05.880 10:00:29 -- nvmf/common.sh@693 -- # key=00112233445566778899aabbccddeeff 00:32:05.880 10:00:29 -- nvmf/common.sh@693 -- # digest=0 00:32:05.880 10:00:29 -- nvmf/common.sh@694 -- # python - 00:32:05.880 10:00:29 -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.pur33FR1gP 00:32:05.880 10:00:29 -- keyring/common.sh@23 -- # echo /tmp/tmp.pur33FR1gP 00:32:05.880 10:00:29 -- keyring/file.sh@95 -- # key0path=/tmp/tmp.pur33FR1gP 00:32:05.880 10:00:29 -- keyring/file.sh@96 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.pur33FR1gP 00:32:05.880 10:00:29 -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.pur33FR1gP 00:32:06.138 10:00:29 -- keyring/file.sh@97 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 
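The 0660/0600 dance above is the point of this block: keyring_file only accepts key files with 0600 permissions, and removing the backing file makes a later attach fail. A condensed sketch of that negative path; $key0path stands in for the random /tmp/tmp.* file, everything else is copied from the trace:

# Condensed recap of the permission and missing-file checks exercised above.
rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
chmod 0660 "$key0path"
"$rpc" -s /var/tmp/bperf.sock keyring_file_add_key key0 "$key0path" && exit 1   # must fail: not 0600
chmod 0600 "$key0path"
"$rpc" -s /var/tmp/bperf.sock keyring_file_add_key key0 "$key0path"             # accepted
rm -f "$key0path"
"$rpc" -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 \
  -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 \
  && exit 1                                                                     # must fail: file removed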
00:32:06.138 10:00:29 -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:32:06.395 nvme0n1 00:32:06.653 10:00:29 -- keyring/file.sh@99 -- # get_refcnt key0 00:32:06.653 10:00:29 -- keyring/common.sh@12 -- # get_key key0 00:32:06.653 10:00:29 -- keyring/common.sh@12 -- # jq -r .refcnt 00:32:06.653 10:00:29 -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:32:06.653 10:00:29 -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:32:06.653 10:00:29 -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:32:06.911 10:00:30 -- keyring/file.sh@99 -- # (( 2 == 2 )) 00:32:06.911 10:00:30 -- keyring/file.sh@100 -- # bperf_cmd keyring_file_remove_key key0 00:32:06.911 10:00:30 -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:32:06.911 10:00:30 -- keyring/file.sh@101 -- # get_key key0 00:32:06.911 10:00:30 -- keyring/file.sh@101 -- # jq -r .removed 00:32:06.911 10:00:30 -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:32:06.911 10:00:30 -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:32:06.911 10:00:30 -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:32:07.170 10:00:30 -- keyring/file.sh@101 -- # [[ true == \t\r\u\e ]] 00:32:07.170 10:00:30 -- keyring/file.sh@102 -- # get_refcnt key0 00:32:07.170 10:00:30 -- keyring/common.sh@12 -- # get_key key0 00:32:07.170 10:00:30 -- keyring/common.sh@12 -- # jq -r .refcnt 00:32:07.170 10:00:30 -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:32:07.170 10:00:30 -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:32:07.170 10:00:30 -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:32:07.736 10:00:30 -- keyring/file.sh@102 -- # (( 1 == 1 )) 00:32:07.736 10:00:30 -- keyring/file.sh@103 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:32:07.736 10:00:30 -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:32:07.994 10:00:31 -- keyring/file.sh@104 -- # jq length 00:32:07.994 10:00:31 -- keyring/file.sh@104 -- # bperf_cmd keyring_get_keys 00:32:07.994 10:00:31 -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:32:08.252 10:00:31 -- keyring/file.sh@104 -- # (( 0 == 0 )) 00:32:08.252 10:00:31 -- keyring/file.sh@107 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.pur33FR1gP 00:32:08.252 10:00:31 -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.pur33FR1gP 00:32:08.515 10:00:31 -- keyring/file.sh@108 -- # bperf_cmd keyring_file_add_key key1 /tmp/tmp.tdFCfxSB47 00:32:08.515 10:00:31 -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/tmp.tdFCfxSB47 00:32:08.777 10:00:31 -- keyring/file.sh@109 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:32:08.777 10:00:31 -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:32:09.034 nvme0n1 00:32:09.034 10:00:32 -- keyring/file.sh@112 -- # bperf_cmd save_config 00:32:09.034 10:00:32 -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock save_config 00:32:09.293 10:00:32 -- keyring/file.sh@112 -- # config='{ 00:32:09.293 "subsystems": [ 00:32:09.293 { 00:32:09.293 "subsystem": "keyring", 00:32:09.293 "config": [ 00:32:09.293 { 00:32:09.293 "method": "keyring_file_add_key", 00:32:09.293 "params": { 00:32:09.293 "name": "key0", 00:32:09.293 "path": "/tmp/tmp.pur33FR1gP" 00:32:09.293 } 00:32:09.293 }, 00:32:09.293 { 00:32:09.293 "method": "keyring_file_add_key", 00:32:09.293 "params": { 00:32:09.293 "name": "key1", 00:32:09.293 "path": "/tmp/tmp.tdFCfxSB47" 00:32:09.293 } 00:32:09.293 } 00:32:09.293 ] 00:32:09.293 }, 00:32:09.293 { 00:32:09.293 "subsystem": "iobuf", 00:32:09.293 "config": [ 00:32:09.293 { 00:32:09.293 "method": "iobuf_set_options", 00:32:09.293 "params": { 00:32:09.293 "small_pool_count": 8192, 00:32:09.293 "large_pool_count": 1024, 00:32:09.293 "small_bufsize": 8192, 00:32:09.293 "large_bufsize": 135168 00:32:09.293 } 00:32:09.293 } 00:32:09.293 ] 00:32:09.293 }, 00:32:09.293 { 00:32:09.293 "subsystem": "sock", 00:32:09.293 "config": [ 00:32:09.293 { 00:32:09.293 "method": "sock_impl_set_options", 00:32:09.293 "params": { 00:32:09.293 "impl_name": "uring", 00:32:09.293 "recv_buf_size": 2097152, 00:32:09.293 "send_buf_size": 2097152, 00:32:09.293 "enable_recv_pipe": true, 00:32:09.293 "enable_quickack": false, 00:32:09.293 "enable_placement_id": 0, 00:32:09.293 "enable_zerocopy_send_server": false, 00:32:09.293 "enable_zerocopy_send_client": false, 00:32:09.293 "zerocopy_threshold": 0, 00:32:09.293 "tls_version": 0, 00:32:09.293 "enable_ktls": false 00:32:09.293 } 00:32:09.293 }, 00:32:09.293 { 00:32:09.293 "method": "sock_impl_set_options", 00:32:09.293 "params": { 00:32:09.293 "impl_name": "posix", 00:32:09.293 "recv_buf_size": 2097152, 00:32:09.293 "send_buf_size": 2097152, 00:32:09.293 "enable_recv_pipe": true, 00:32:09.293 "enable_quickack": false, 00:32:09.293 "enable_placement_id": 0, 00:32:09.293 "enable_zerocopy_send_server": true, 00:32:09.293 "enable_zerocopy_send_client": false, 00:32:09.293 "zerocopy_threshold": 0, 00:32:09.293 "tls_version": 0, 00:32:09.293 "enable_ktls": false 00:32:09.293 } 00:32:09.293 }, 00:32:09.293 { 00:32:09.293 "method": "sock_impl_set_options", 00:32:09.293 "params": { 00:32:09.293 "impl_name": "ssl", 00:32:09.293 "recv_buf_size": 4096, 00:32:09.293 "send_buf_size": 4096, 00:32:09.293 "enable_recv_pipe": true, 00:32:09.293 "enable_quickack": false, 00:32:09.293 "enable_placement_id": 0, 00:32:09.293 "enable_zerocopy_send_server": true, 00:32:09.293 "enable_zerocopy_send_client": false, 00:32:09.293 "zerocopy_threshold": 0, 00:32:09.293 "tls_version": 0, 00:32:09.293 "enable_ktls": false 00:32:09.293 } 00:32:09.293 } 00:32:09.293 ] 00:32:09.293 }, 00:32:09.293 { 00:32:09.293 "subsystem": "vmd", 00:32:09.293 "config": [] 00:32:09.293 }, 00:32:09.293 { 00:32:09.293 "subsystem": "accel", 00:32:09.293 "config": [ 00:32:09.293 { 00:32:09.293 "method": "accel_set_options", 00:32:09.293 "params": { 00:32:09.293 "small_cache_size": 128, 00:32:09.293 "large_cache_size": 16, 00:32:09.293 "task_count": 2048, 00:32:09.293 "sequence_count": 2048, 00:32:09.293 "buf_count": 2048 00:32:09.293 } 00:32:09.293 } 
00:32:09.293 ] 00:32:09.293 }, 00:32:09.293 { 00:32:09.293 "subsystem": "bdev", 00:32:09.293 "config": [ 00:32:09.293 { 00:32:09.293 "method": "bdev_set_options", 00:32:09.293 "params": { 00:32:09.293 "bdev_io_pool_size": 65535, 00:32:09.293 "bdev_io_cache_size": 256, 00:32:09.293 "bdev_auto_examine": true, 00:32:09.293 "iobuf_small_cache_size": 128, 00:32:09.293 "iobuf_large_cache_size": 16 00:32:09.293 } 00:32:09.293 }, 00:32:09.293 { 00:32:09.293 "method": "bdev_raid_set_options", 00:32:09.293 "params": { 00:32:09.293 "process_window_size_kb": 1024 00:32:09.293 } 00:32:09.293 }, 00:32:09.293 { 00:32:09.293 "method": "bdev_iscsi_set_options", 00:32:09.293 "params": { 00:32:09.293 "timeout_sec": 30 00:32:09.293 } 00:32:09.293 }, 00:32:09.293 { 00:32:09.293 "method": "bdev_nvme_set_options", 00:32:09.293 "params": { 00:32:09.293 "action_on_timeout": "none", 00:32:09.293 "timeout_us": 0, 00:32:09.293 "timeout_admin_us": 0, 00:32:09.293 "keep_alive_timeout_ms": 10000, 00:32:09.293 "arbitration_burst": 0, 00:32:09.293 "low_priority_weight": 0, 00:32:09.293 "medium_priority_weight": 0, 00:32:09.293 "high_priority_weight": 0, 00:32:09.293 "nvme_adminq_poll_period_us": 10000, 00:32:09.293 "nvme_ioq_poll_period_us": 0, 00:32:09.293 "io_queue_requests": 512, 00:32:09.293 "delay_cmd_submit": true, 00:32:09.293 "transport_retry_count": 4, 00:32:09.293 "bdev_retry_count": 3, 00:32:09.293 "transport_ack_timeout": 0, 00:32:09.293 "ctrlr_loss_timeout_sec": 0, 00:32:09.293 "reconnect_delay_sec": 0, 00:32:09.293 "fast_io_fail_timeout_sec": 0, 00:32:09.293 "disable_auto_failback": false, 00:32:09.293 "generate_uuids": false, 00:32:09.293 "transport_tos": 0, 00:32:09.293 "nvme_error_stat": false, 00:32:09.293 "rdma_srq_size": 0, 00:32:09.293 "io_path_stat": false, 00:32:09.293 "allow_accel_sequence": false, 00:32:09.293 "rdma_max_cq_size": 0, 00:32:09.293 "rdma_cm_event_timeout_ms": 0, 00:32:09.293 "dhchap_digests": [ 00:32:09.293 "sha256", 00:32:09.293 "sha384", 00:32:09.293 "sha512" 00:32:09.293 ], 00:32:09.293 "dhchap_dhgroups": [ 00:32:09.293 "null", 00:32:09.293 "ffdhe2048", 00:32:09.293 "ffdhe3072", 00:32:09.293 "ffdhe4096", 00:32:09.293 "ffdhe6144", 00:32:09.293 "ffdhe8192" 00:32:09.293 ] 00:32:09.293 } 00:32:09.293 }, 00:32:09.293 { 00:32:09.293 "method": "bdev_nvme_attach_controller", 00:32:09.293 "params": { 00:32:09.293 "name": "nvme0", 00:32:09.293 "trtype": "TCP", 00:32:09.293 "adrfam": "IPv4", 00:32:09.293 "traddr": "127.0.0.1", 00:32:09.293 "trsvcid": "4420", 00:32:09.293 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:32:09.293 "prchk_reftag": false, 00:32:09.293 "prchk_guard": false, 00:32:09.293 "ctrlr_loss_timeout_sec": 0, 00:32:09.293 "reconnect_delay_sec": 0, 00:32:09.293 "fast_io_fail_timeout_sec": 0, 00:32:09.293 "psk": "key0", 00:32:09.293 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:32:09.293 "hdgst": false, 00:32:09.293 "ddgst": false 00:32:09.293 } 00:32:09.293 }, 00:32:09.293 { 00:32:09.293 "method": "bdev_nvme_set_hotplug", 00:32:09.293 "params": { 00:32:09.293 "period_us": 100000, 00:32:09.293 "enable": false 00:32:09.293 } 00:32:09.293 }, 00:32:09.293 { 00:32:09.293 "method": "bdev_wait_for_examine" 00:32:09.293 } 00:32:09.293 ] 00:32:09.293 }, 00:32:09.293 { 00:32:09.293 "subsystem": "nbd", 00:32:09.293 "config": [] 00:32:09.293 } 00:32:09.293 ] 00:32:09.293 }' 00:32:09.293 10:00:32 -- keyring/file.sh@114 -- # killprocess 85049 00:32:09.293 10:00:32 -- common/autotest_common.sh@936 -- # '[' -z 85049 ']' 00:32:09.293 10:00:32 -- common/autotest_common.sh@940 -- # kill -0 85049 
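The JSON dumped by save_config above is what gets replayed into the second bdevperf instance further down in this trace. As an illustration only (not part of the test script), the keyring portion of such a dump could be pulled out with jq, assuming the same rpc.py path and socket as this run:

    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock save_config \
        | jq '.subsystems[] | select(.subsystem == "keyring").config'
    # Expected here: two keyring_file_add_key entries, one for key0 and one for key1.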
00:32:09.294 10:00:32 -- common/autotest_common.sh@941 -- # uname 00:32:09.294 10:00:32 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:32:09.294 10:00:32 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 85049 00:32:09.294 killing process with pid 85049 00:32:09.294 Received shutdown signal, test time was about 1.000000 seconds 00:32:09.294 00:32:09.294 Latency(us) 00:32:09.294 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:32:09.294 =================================================================================================================== 00:32:09.294 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:32:09.294 10:00:32 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:32:09.294 10:00:32 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:32:09.294 10:00:32 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 85049' 00:32:09.294 10:00:32 -- common/autotest_common.sh@955 -- # kill 85049 00:32:09.294 10:00:32 -- common/autotest_common.sh@960 -- # wait 85049 00:32:10.668 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:32:10.668 10:00:33 -- keyring/file.sh@117 -- # bperfpid=85302 00:32:10.668 10:00:33 -- keyring/file.sh@119 -- # waitforlisten 85302 /var/tmp/bperf.sock 00:32:10.668 10:00:33 -- common/autotest_common.sh@817 -- # '[' -z 85302 ']' 00:32:10.668 10:00:33 -- keyring/file.sh@115 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z -c /dev/fd/63 00:32:10.668 10:00:33 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bperf.sock 00:32:10.668 10:00:33 -- common/autotest_common.sh@822 -- # local max_retries=100 00:32:10.668 10:00:33 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 
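The relaunch traced above (keyring/file.sh@115-119) feeds the saved configuration back in through process substitution, which is why the command line shows -c /dev/fd/63. A minimal sketch of that pattern, assuming $config holds the JSON captured earlier and that waitforlisten is the helper sourced from autotest_common.sh:

    /home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
        -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 \
        -r /var/tmp/bperf.sock -z -c <(echo "$config") &
    bperfpid=$!
    waitforlisten "$bperfpid" /var/tmp/bperf.sock   # blocks until the RPC socket is listening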
00:32:10.668 10:00:33 -- common/autotest_common.sh@826 -- # xtrace_disable 00:32:10.668 10:00:33 -- keyring/file.sh@115 -- # echo '{ 00:32:10.668 "subsystems": [ 00:32:10.668 { 00:32:10.668 "subsystem": "keyring", 00:32:10.668 "config": [ 00:32:10.668 { 00:32:10.668 "method": "keyring_file_add_key", 00:32:10.668 "params": { 00:32:10.668 "name": "key0", 00:32:10.668 "path": "/tmp/tmp.pur33FR1gP" 00:32:10.668 } 00:32:10.668 }, 00:32:10.668 { 00:32:10.668 "method": "keyring_file_add_key", 00:32:10.668 "params": { 00:32:10.668 "name": "key1", 00:32:10.668 "path": "/tmp/tmp.tdFCfxSB47" 00:32:10.668 } 00:32:10.668 } 00:32:10.668 ] 00:32:10.668 }, 00:32:10.668 { 00:32:10.668 "subsystem": "iobuf", 00:32:10.668 "config": [ 00:32:10.668 { 00:32:10.668 "method": "iobuf_set_options", 00:32:10.668 "params": { 00:32:10.668 "small_pool_count": 8192, 00:32:10.668 "large_pool_count": 1024, 00:32:10.668 "small_bufsize": 8192, 00:32:10.668 "large_bufsize": 135168 00:32:10.668 } 00:32:10.668 } 00:32:10.668 ] 00:32:10.668 }, 00:32:10.668 { 00:32:10.668 "subsystem": "sock", 00:32:10.668 "config": [ 00:32:10.668 { 00:32:10.668 "method": "sock_impl_set_options", 00:32:10.668 "params": { 00:32:10.668 "impl_name": "uring", 00:32:10.668 "recv_buf_size": 2097152, 00:32:10.668 "send_buf_size": 2097152, 00:32:10.668 "enable_recv_pipe": true, 00:32:10.668 "enable_quickack": false, 00:32:10.668 "enable_placement_id": 0, 00:32:10.668 "enable_zerocopy_send_server": false, 00:32:10.668 "enable_zerocopy_send_client": false, 00:32:10.668 "zerocopy_threshold": 0, 00:32:10.668 "tls_version": 0, 00:32:10.668 "enable_ktls": false 00:32:10.668 } 00:32:10.668 }, 00:32:10.668 { 00:32:10.668 "method": "sock_impl_set_options", 00:32:10.668 "params": { 00:32:10.668 "impl_name": "posix", 00:32:10.668 "recv_buf_size": 2097152, 00:32:10.668 "send_buf_size": 2097152, 00:32:10.668 "enable_recv_pipe": true, 00:32:10.668 "enable_quickack": false, 00:32:10.668 "enable_placement_id": 0, 00:32:10.668 "enable_zerocopy_send_server": true, 00:32:10.668 "enable_zerocopy_send_client": false, 00:32:10.668 "zerocopy_threshold": 0, 00:32:10.668 "tls_version": 0, 00:32:10.668 "enable_ktls": false 00:32:10.668 } 00:32:10.668 }, 00:32:10.668 { 00:32:10.668 "method": "sock_impl_set_options", 00:32:10.668 "params": { 00:32:10.668 "impl_name": "ssl", 00:32:10.668 "recv_buf_size": 4096, 00:32:10.668 "send_buf_size": 4096, 00:32:10.668 "enable_recv_pipe": true, 00:32:10.669 "enable_quickack": false, 00:32:10.669 "enable_placement_id": 0, 00:32:10.669 "enable_zerocopy_send_server": true, 00:32:10.669 "enable_zerocopy_send_client": false, 00:32:10.669 "zerocopy_threshold": 0, 00:32:10.669 "tls_version": 0, 00:32:10.669 "enable_ktls": false 00:32:10.669 } 00:32:10.669 } 00:32:10.669 ] 00:32:10.669 }, 00:32:10.669 { 00:32:10.669 "subsystem": "vmd", 00:32:10.669 "config": [] 00:32:10.669 }, 00:32:10.669 { 00:32:10.669 "subsystem": "accel", 00:32:10.669 "config": [ 00:32:10.669 { 00:32:10.669 "method": "accel_set_options", 00:32:10.669 "params": { 00:32:10.669 "small_cache_size": 128, 00:32:10.669 "large_cache_size": 16, 00:32:10.669 "task_count": 2048, 00:32:10.669 "sequence_count": 2048, 00:32:10.669 "buf_count": 2048 00:32:10.669 } 00:32:10.669 } 00:32:10.669 ] 00:32:10.669 }, 00:32:10.669 { 00:32:10.669 "subsystem": "bdev", 00:32:10.669 "config": [ 00:32:10.669 { 00:32:10.669 "method": "bdev_set_options", 00:32:10.669 "params": { 00:32:10.669 "bdev_io_pool_size": 65535, 00:32:10.669 "bdev_io_cache_size": 256, 00:32:10.669 "bdev_auto_examine": true, 00:32:10.669 
"iobuf_small_cache_size": 128, 00:32:10.669 "iobuf_large_cache_size": 16 00:32:10.669 } 00:32:10.669 }, 00:32:10.669 { 00:32:10.669 "method": "bdev_raid_set_options", 00:32:10.669 "params": { 00:32:10.669 "process_window_size_kb": 1024 00:32:10.669 } 00:32:10.669 }, 00:32:10.669 { 00:32:10.669 "method": "bdev_iscsi_set_options", 00:32:10.669 "params": { 00:32:10.669 "timeout_sec": 30 00:32:10.669 } 00:32:10.669 }, 00:32:10.669 { 00:32:10.669 "method": "bdev_nvme_set_options", 00:32:10.669 "params": { 00:32:10.669 "action_on_timeout": "none", 00:32:10.669 "timeout_us": 0, 00:32:10.669 "timeout_admin_us": 0, 00:32:10.669 "keep_alive_timeout_ms": 10000, 00:32:10.669 "arbitration_burst": 0, 00:32:10.669 "low_priority_weight": 0, 00:32:10.669 "medium_priority_weight": 0, 00:32:10.669 "high_priority_weight": 0, 00:32:10.669 "nvme_adminq_poll_period_us": 10000, 00:32:10.669 "nvme_ioq_poll_period_us": 0, 00:32:10.669 "io_queue_requests": 512, 00:32:10.669 "delay_cmd_submit": true, 00:32:10.669 "transport_retry_count": 4, 00:32:10.669 "bdev_retry_count": 3, 00:32:10.669 "transport_ack_timeout": 0, 00:32:10.669 "ctrlr_loss_timeout_sec": 0, 00:32:10.669 "reconnect_delay_sec": 0, 00:32:10.669 "fast_io_fail_timeout_sec": 0, 00:32:10.669 "disable_auto_failback": false, 00:32:10.669 "generate_uuids": false, 00:32:10.669 "transport_tos": 0, 00:32:10.669 "nvme_error_stat": false, 00:32:10.669 "rdma_srq_size": 0, 00:32:10.669 "io_path_stat": false, 00:32:10.669 "allow_accel_sequence": false, 00:32:10.669 "rdma_max_cq_size": 0, 00:32:10.669 "rdma_cm_event_timeout_ms": 0, 00:32:10.669 "dhchap_digests": [ 00:32:10.669 "sha256", 00:32:10.669 "sha384", 00:32:10.669 "sha512" 00:32:10.669 ], 00:32:10.669 "dhchap_dhgroups": [ 00:32:10.669 "null", 00:32:10.669 "ffdhe2048", 00:32:10.669 "ffdhe3072", 00:32:10.669 "ffdhe4096", 00:32:10.669 "ffdhe6144", 00:32:10.669 "ffdhe8192" 00:32:10.669 ] 00:32:10.669 } 00:32:10.669 }, 00:32:10.669 { 00:32:10.669 "method": "bdev_nvme_attach_controller", 00:32:10.669 "params": { 00:32:10.669 "name": "nvme0", 00:32:10.669 "trtype": "TCP", 00:32:10.669 "adrfam": "IPv4", 00:32:10.669 "traddr": "127.0.0.1", 00:32:10.669 "trsvcid": "4420", 00:32:10.669 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:32:10.669 "prchk_reftag": false, 00:32:10.669 "prchk_guard": false, 00:32:10.669 "ctrlr_loss_timeout_sec": 0, 00:32:10.669 "reconnect_delay_sec": 0, 00:32:10.669 "fast_io_fail_timeout_sec": 0, 00:32:10.669 "psk": "key0", 00:32:10.669 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:32:10.669 "hdgst": false, 00:32:10.669 "ddgst": false 00:32:10.669 } 00:32:10.669 }, 00:32:10.669 { 00:32:10.669 "method": "bdev_nvme_set_hotplug", 00:32:10.669 "params": { 00:32:10.669 "period_us": 100000, 00:32:10.669 "enable": false 00:32:10.669 } 00:32:10.669 }, 00:32:10.669 { 00:32:10.669 "method": "bdev_wait_for_examine" 00:32:10.669 } 00:32:10.669 ] 00:32:10.669 }, 00:32:10.669 { 00:32:10.669 "subsystem": "nbd", 00:32:10.669 "config": [] 00:32:10.669 } 00:32:10.669 ] 00:32:10.669 }' 00:32:10.669 10:00:33 -- common/autotest_common.sh@10 -- # set +x 00:32:10.669 [2024-04-18 10:00:33.778207] Starting SPDK v24.05-pre git sha1 65b4e17c6 / DPDK 23.11.0 initialization... 
00:32:10.669 [2024-04-18 10:00:33.778792] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85302 ] 00:32:10.669 [2024-04-18 10:00:33.967455] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:10.927 [2024-04-18 10:00:34.208154] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:32:11.495 [2024-04-18 10:00:34.613978] bdev_nvme_rpc.c: 515:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:32:11.495 10:00:34 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:32:11.495 10:00:34 -- common/autotest_common.sh@850 -- # return 0 00:32:11.495 10:00:34 -- keyring/file.sh@120 -- # bperf_cmd keyring_get_keys 00:32:11.496 10:00:34 -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:32:11.496 10:00:34 -- keyring/file.sh@120 -- # jq length 00:32:11.755 10:00:34 -- keyring/file.sh@120 -- # (( 2 == 2 )) 00:32:11.755 10:00:34 -- keyring/file.sh@121 -- # get_refcnt key0 00:32:11.755 10:00:34 -- keyring/common.sh@12 -- # get_key key0 00:32:11.755 10:00:34 -- keyring/common.sh@12 -- # jq -r .refcnt 00:32:11.755 10:00:34 -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:32:11.755 10:00:34 -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:32:11.755 10:00:34 -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:32:12.014 10:00:35 -- keyring/file.sh@121 -- # (( 2 == 2 )) 00:32:12.014 10:00:35 -- keyring/file.sh@122 -- # get_refcnt key1 00:32:12.014 10:00:35 -- keyring/common.sh@12 -- # get_key key1 00:32:12.014 10:00:35 -- keyring/common.sh@12 -- # jq -r .refcnt 00:32:12.014 10:00:35 -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:32:12.014 10:00:35 -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:32:12.014 10:00:35 -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:32:12.272 10:00:35 -- keyring/file.sh@122 -- # (( 1 == 1 )) 00:32:12.272 10:00:35 -- keyring/file.sh@123 -- # bperf_cmd bdev_nvme_get_controllers 00:32:12.272 10:00:35 -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_get_controllers 00:32:12.272 10:00:35 -- keyring/file.sh@123 -- # jq -r '.[].name' 00:32:12.531 10:00:35 -- keyring/file.sh@123 -- # [[ nvme0 == nvme0 ]] 00:32:12.531 10:00:35 -- keyring/file.sh@1 -- # cleanup 00:32:12.531 10:00:35 -- keyring/file.sh@19 -- # rm -f /tmp/tmp.pur33FR1gP /tmp/tmp.tdFCfxSB47 00:32:12.531 10:00:35 -- keyring/file.sh@20 -- # killprocess 85302 00:32:12.531 10:00:35 -- common/autotest_common.sh@936 -- # '[' -z 85302 ']' 00:32:12.531 10:00:35 -- common/autotest_common.sh@940 -- # kill -0 85302 00:32:12.531 10:00:35 -- common/autotest_common.sh@941 -- # uname 00:32:12.531 10:00:35 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:32:12.531 10:00:35 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 85302 00:32:12.531 killing process with pid 85302 00:32:12.531 Received shutdown signal, test time was about 1.000000 seconds 00:32:12.531 00:32:12.531 Latency(us) 00:32:12.531 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:32:12.531 
=================================================================================================================== 00:32:12.531 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:32:12.531 10:00:35 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:32:12.531 10:00:35 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:32:12.531 10:00:35 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 85302' 00:32:12.531 10:00:35 -- common/autotest_common.sh@955 -- # kill 85302 00:32:12.531 10:00:35 -- common/autotest_common.sh@960 -- # wait 85302 00:32:13.968 10:00:36 -- keyring/file.sh@21 -- # killprocess 85028 00:32:13.968 10:00:36 -- common/autotest_common.sh@936 -- # '[' -z 85028 ']' 00:32:13.968 10:00:36 -- common/autotest_common.sh@940 -- # kill -0 85028 00:32:13.968 10:00:36 -- common/autotest_common.sh@941 -- # uname 00:32:13.968 10:00:36 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:32:13.968 10:00:36 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 85028 00:32:13.968 10:00:37 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:32:13.968 killing process with pid 85028 00:32:13.968 10:00:37 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:32:13.968 10:00:37 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 85028' 00:32:13.968 10:00:37 -- common/autotest_common.sh@955 -- # kill 85028 00:32:13.968 [2024-04-18 10:00:37.007215] app.c: 937:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:32:13.968 10:00:37 -- common/autotest_common.sh@960 -- # wait 85028 00:32:15.872 ************************************ 00:32:15.872 END TEST keyring_file 00:32:15.872 ************************************ 00:32:15.872 00:32:15.872 real 0m19.818s 00:32:15.872 user 0m44.582s 00:32:15.872 sys 0m3.571s 00:32:15.872 10:00:39 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:32:15.872 10:00:39 -- common/autotest_common.sh@10 -- # set +x 00:32:15.872 10:00:39 -- spdk/autotest.sh@294 -- # [[ n == y ]] 00:32:15.872 10:00:39 -- spdk/autotest.sh@306 -- # '[' 0 -eq 1 ']' 00:32:15.872 10:00:39 -- spdk/autotest.sh@310 -- # '[' 0 -eq 1 ']' 00:32:15.872 10:00:39 -- spdk/autotest.sh@314 -- # '[' 0 -eq 1 ']' 00:32:15.872 10:00:39 -- spdk/autotest.sh@319 -- # '[' 0 -eq 1 ']' 00:32:15.872 10:00:39 -- spdk/autotest.sh@328 -- # '[' 0 -eq 1 ']' 00:32:15.872 10:00:39 -- spdk/autotest.sh@333 -- # '[' 0 -eq 1 ']' 00:32:15.872 10:00:39 -- spdk/autotest.sh@337 -- # '[' 0 -eq 1 ']' 00:32:15.872 10:00:39 -- spdk/autotest.sh@341 -- # '[' 0 -eq 1 ']' 00:32:15.872 10:00:39 -- spdk/autotest.sh@345 -- # '[' 0 -eq 1 ']' 00:32:15.872 10:00:39 -- spdk/autotest.sh@350 -- # '[' 0 -eq 1 ']' 00:32:15.872 10:00:39 -- spdk/autotest.sh@354 -- # '[' 0 -eq 1 ']' 00:32:15.872 10:00:39 -- spdk/autotest.sh@361 -- # [[ 0 -eq 1 ]] 00:32:15.872 10:00:39 -- spdk/autotest.sh@365 -- # [[ 0 -eq 1 ]] 00:32:15.872 10:00:39 -- spdk/autotest.sh@369 -- # [[ 0 -eq 1 ]] 00:32:15.872 10:00:39 -- spdk/autotest.sh@373 -- # [[ 0 -eq 1 ]] 00:32:15.872 10:00:39 -- spdk/autotest.sh@378 -- # trap - SIGINT SIGTERM EXIT 00:32:15.872 10:00:39 -- spdk/autotest.sh@380 -- # timing_enter post_cleanup 00:32:15.872 10:00:39 -- common/autotest_common.sh@710 -- # xtrace_disable 00:32:15.872 10:00:39 -- common/autotest_common.sh@10 -- # set +x 00:32:15.872 10:00:39 -- spdk/autotest.sh@381 -- # autotest_cleanup 00:32:15.872 10:00:39 -- common/autotest_common.sh@1378 -- # local 
autotest_es=0 00:32:15.872 10:00:39 -- common/autotest_common.sh@1379 -- # xtrace_disable 00:32:15.872 10:00:39 -- common/autotest_common.sh@10 -- # set +x 00:32:17.810 INFO: APP EXITING 00:32:17.810 INFO: killing all VMs 00:32:17.810 INFO: killing vhost app 00:32:17.810 INFO: EXIT DONE 00:32:18.379 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:32:18.379 0000:00:11.0 (1b36 0010): Already using the nvme driver 00:32:18.379 0000:00:10.0 (1b36 0010): Already using the nvme driver 00:32:18.953 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:32:18.953 Cleaning 00:32:18.953 Removing: /var/run/dpdk/spdk0/config 00:32:18.953 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-0 00:32:18.953 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-1 00:32:18.953 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-2 00:32:18.953 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-3 00:32:18.953 Removing: /var/run/dpdk/spdk0/fbarray_memzone 00:32:18.953 Removing: /var/run/dpdk/spdk0/hugepage_info 00:32:18.953 Removing: /var/run/dpdk/spdk1/config 00:32:18.953 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-0 00:32:18.953 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-1 00:32:18.953 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-2 00:32:18.953 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-3 00:32:18.953 Removing: /var/run/dpdk/spdk1/fbarray_memzone 00:32:18.953 Removing: /var/run/dpdk/spdk1/hugepage_info 00:32:18.953 Removing: /var/run/dpdk/spdk2/config 00:32:18.953 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-0 00:32:18.953 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-1 00:32:18.953 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-2 00:32:18.953 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-3 00:32:18.953 Removing: /var/run/dpdk/spdk2/fbarray_memzone 00:32:18.953 Removing: /var/run/dpdk/spdk2/hugepage_info 00:32:18.953 Removing: /var/run/dpdk/spdk3/config 00:32:18.953 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-0 00:32:18.953 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-1 00:32:18.953 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-2 00:32:19.219 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-3 00:32:19.219 Removing: /var/run/dpdk/spdk3/fbarray_memzone 00:32:19.219 Removing: /var/run/dpdk/spdk3/hugepage_info 00:32:19.219 Removing: /var/run/dpdk/spdk4/config 00:32:19.219 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-0 00:32:19.219 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-1 00:32:19.219 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-2 00:32:19.219 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-3 00:32:19.219 Removing: /var/run/dpdk/spdk4/fbarray_memzone 00:32:19.219 Removing: /var/run/dpdk/spdk4/hugepage_info 00:32:19.219 Removing: /dev/shm/nvmf_trace.0 00:32:19.219 Removing: /dev/shm/spdk_tgt_trace.pid58493 00:32:19.219 Removing: /var/run/dpdk/spdk0 00:32:19.219 Removing: /var/run/dpdk/spdk1 00:32:19.219 Removing: /var/run/dpdk/spdk2 00:32:19.219 Removing: /var/run/dpdk/spdk3 00:32:19.219 Removing: /var/run/dpdk/spdk4 00:32:19.219 Removing: /var/run/dpdk/spdk_pid58243 00:32:19.219 Removing: /var/run/dpdk/spdk_pid58493 00:32:19.219 Removing: /var/run/dpdk/spdk_pid58741 00:32:19.219 Removing: /var/run/dpdk/spdk_pid58850 00:32:19.219 Removing: /var/run/dpdk/spdk_pid58906 00:32:19.219 Removing: /var/run/dpdk/spdk_pid59042 00:32:19.219 Removing: 
/var/run/dpdk/spdk_pid59066 00:32:19.219 Removing: /var/run/dpdk/spdk_pid59230 00:32:19.219 Removing: /var/run/dpdk/spdk_pid59433 00:32:19.219 Removing: /var/run/dpdk/spdk_pid59602 00:32:19.219 Removing: /var/run/dpdk/spdk_pid59710 00:32:19.219 Removing: /var/run/dpdk/spdk_pid59820 00:32:19.219 Removing: /var/run/dpdk/spdk_pid59939 00:32:19.219 Removing: /var/run/dpdk/spdk_pid60050 00:32:19.219 Removing: /var/run/dpdk/spdk_pid60099 00:32:19.219 Removing: /var/run/dpdk/spdk_pid60146 00:32:19.219 Removing: /var/run/dpdk/spdk_pid60219 00:32:19.219 Removing: /var/run/dpdk/spdk_pid60333 00:32:19.219 Removing: /var/run/dpdk/spdk_pid60800 00:32:19.219 Removing: /var/run/dpdk/spdk_pid60879 00:32:19.219 Removing: /var/run/dpdk/spdk_pid60962 00:32:19.219 Removing: /var/run/dpdk/spdk_pid60984 00:32:19.219 Removing: /var/run/dpdk/spdk_pid61137 00:32:19.219 Removing: /var/run/dpdk/spdk_pid61159 00:32:19.219 Removing: /var/run/dpdk/spdk_pid61315 00:32:19.219 Removing: /var/run/dpdk/spdk_pid61331 00:32:19.219 Removing: /var/run/dpdk/spdk_pid61410 00:32:19.219 Removing: /var/run/dpdk/spdk_pid61434 00:32:19.219 Removing: /var/run/dpdk/spdk_pid61503 00:32:19.219 Removing: /var/run/dpdk/spdk_pid61521 00:32:19.219 Removing: /var/run/dpdk/spdk_pid61722 00:32:19.219 Removing: /var/run/dpdk/spdk_pid61767 00:32:19.219 Removing: /var/run/dpdk/spdk_pid61858 00:32:19.219 Removing: /var/run/dpdk/spdk_pid61938 00:32:19.219 Removing: /var/run/dpdk/spdk_pid61984 00:32:19.219 Removing: /var/run/dpdk/spdk_pid62077 00:32:19.219 Removing: /var/run/dpdk/spdk_pid62128 00:32:19.219 Removing: /var/run/dpdk/spdk_pid62178 00:32:19.219 Removing: /var/run/dpdk/spdk_pid62230 00:32:19.219 Removing: /var/run/dpdk/spdk_pid62285 00:32:19.219 Removing: /var/run/dpdk/spdk_pid62335 00:32:19.219 Removing: /var/run/dpdk/spdk_pid62386 00:32:19.219 Removing: /var/run/dpdk/spdk_pid62442 00:32:19.219 Removing: /var/run/dpdk/spdk_pid62493 00:32:19.219 Removing: /var/run/dpdk/spdk_pid62544 00:32:19.219 Removing: /var/run/dpdk/spdk_pid62596 00:32:19.219 Removing: /var/run/dpdk/spdk_pid62647 00:32:19.219 Removing: /var/run/dpdk/spdk_pid62698 00:32:19.219 Removing: /var/run/dpdk/spdk_pid62754 00:32:19.219 Removing: /var/run/dpdk/spdk_pid62803 00:32:19.219 Removing: /var/run/dpdk/spdk_pid62855 00:32:19.219 Removing: /var/run/dpdk/spdk_pid62911 00:32:19.219 Removing: /var/run/dpdk/spdk_pid62959 00:32:19.219 Removing: /var/run/dpdk/spdk_pid63018 00:32:19.219 Removing: /var/run/dpdk/spdk_pid63069 00:32:19.219 Removing: /var/run/dpdk/spdk_pid63121 00:32:19.219 Removing: /var/run/dpdk/spdk_pid63216 00:32:19.219 Removing: /var/run/dpdk/spdk_pid63341 00:32:19.219 Removing: /var/run/dpdk/spdk_pid63687 00:32:19.219 Removing: /var/run/dpdk/spdk_pid63710 00:32:19.219 Removing: /var/run/dpdk/spdk_pid63762 00:32:19.219 Removing: /var/run/dpdk/spdk_pid63794 00:32:19.219 Removing: /var/run/dpdk/spdk_pid63827 00:32:19.219 Removing: /var/run/dpdk/spdk_pid63864 00:32:19.219 Removing: /var/run/dpdk/spdk_pid63895 00:32:19.219 Removing: /var/run/dpdk/spdk_pid63928 00:32:19.219 Removing: /var/run/dpdk/spdk_pid63959 00:32:19.219 Removing: /var/run/dpdk/spdk_pid63990 00:32:19.219 Removing: /var/run/dpdk/spdk_pid64023 00:32:19.219 Removing: /var/run/dpdk/spdk_pid64060 00:32:19.480 Removing: /var/run/dpdk/spdk_pid64091 00:32:19.480 Removing: /var/run/dpdk/spdk_pid64124 00:32:19.480 Removing: /var/run/dpdk/spdk_pid64155 00:32:19.480 Removing: /var/run/dpdk/spdk_pid64186 00:32:19.480 Removing: /var/run/dpdk/spdk_pid64219 00:32:19.480 Removing: /var/run/dpdk/spdk_pid64250 
00:32:19.480 Removing: /var/run/dpdk/spdk_pid64281 00:32:19.480 Removing: /var/run/dpdk/spdk_pid64314 00:32:19.480 Removing: /var/run/dpdk/spdk_pid64367 00:32:19.480 Removing: /var/run/dpdk/spdk_pid64403 00:32:19.480 Removing: /var/run/dpdk/spdk_pid64444 00:32:19.480 Removing: /var/run/dpdk/spdk_pid64536 00:32:19.480 Removing: /var/run/dpdk/spdk_pid64589 00:32:19.480 Removing: /var/run/dpdk/spdk_pid64616 00:32:19.480 Removing: /var/run/dpdk/spdk_pid64667 00:32:19.480 Removing: /var/run/dpdk/spdk_pid64689 00:32:19.480 Removing: /var/run/dpdk/spdk_pid64714 00:32:19.480 Removing: /var/run/dpdk/spdk_pid64778 00:32:19.480 Removing: /var/run/dpdk/spdk_pid64809 00:32:19.480 Removing: /var/run/dpdk/spdk_pid64859 00:32:19.480 Removing: /var/run/dpdk/spdk_pid64886 00:32:19.480 Removing: /var/run/dpdk/spdk_pid64912 00:32:19.480 Removing: /var/run/dpdk/spdk_pid64935 00:32:19.480 Removing: /var/run/dpdk/spdk_pid64962 00:32:19.480 Removing: /var/run/dpdk/spdk_pid64989 00:32:19.480 Removing: /var/run/dpdk/spdk_pid65016 00:32:19.480 Removing: /var/run/dpdk/spdk_pid65043 00:32:19.480 Removing: /var/run/dpdk/spdk_pid65093 00:32:19.480 Removing: /var/run/dpdk/spdk_pid65141 00:32:19.480 Removing: /var/run/dpdk/spdk_pid65163 00:32:19.480 Removing: /var/run/dpdk/spdk_pid65213 00:32:19.480 Removing: /var/run/dpdk/spdk_pid65240 00:32:19.480 Removing: /var/run/dpdk/spdk_pid65265 00:32:19.480 Removing: /var/run/dpdk/spdk_pid65327 00:32:19.480 Removing: /var/run/dpdk/spdk_pid65356 00:32:19.480 Removing: /var/run/dpdk/spdk_pid65405 00:32:19.480 Removing: /var/run/dpdk/spdk_pid65430 00:32:19.480 Removing: /var/run/dpdk/spdk_pid65450 00:32:19.480 Removing: /var/run/dpdk/spdk_pid65475 00:32:19.480 Removing: /var/run/dpdk/spdk_pid65500 00:32:19.480 Removing: /var/run/dpdk/spdk_pid65525 00:32:19.480 Removing: /var/run/dpdk/spdk_pid65550 00:32:19.480 Removing: /var/run/dpdk/spdk_pid65575 00:32:19.481 Removing: /var/run/dpdk/spdk_pid65670 00:32:19.481 Removing: /var/run/dpdk/spdk_pid65774 00:32:19.481 Removing: /var/run/dpdk/spdk_pid65950 00:32:19.481 Removing: /var/run/dpdk/spdk_pid66007 00:32:19.481 Removing: /var/run/dpdk/spdk_pid66066 00:32:19.481 Removing: /var/run/dpdk/spdk_pid66111 00:32:19.481 Removing: /var/run/dpdk/spdk_pid66145 00:32:19.481 Removing: /var/run/dpdk/spdk_pid66183 00:32:19.481 Removing: /var/run/dpdk/spdk_pid66232 00:32:19.481 Removing: /var/run/dpdk/spdk_pid66265 00:32:19.481 Removing: /var/run/dpdk/spdk_pid66356 00:32:19.481 Removing: /var/run/dpdk/spdk_pid66406 00:32:19.481 Removing: /var/run/dpdk/spdk_pid66492 00:32:19.481 Removing: /var/run/dpdk/spdk_pid66614 00:32:19.481 Removing: /var/run/dpdk/spdk_pid66712 00:32:19.481 Removing: /var/run/dpdk/spdk_pid66769 00:32:19.481 Removing: /var/run/dpdk/spdk_pid66905 00:32:19.481 Removing: /var/run/dpdk/spdk_pid66969 00:32:19.481 Removing: /var/run/dpdk/spdk_pid67023 00:32:19.481 Removing: /var/run/dpdk/spdk_pid67311 00:32:19.481 Removing: /var/run/dpdk/spdk_pid67452 00:32:19.481 Removing: /var/run/dpdk/spdk_pid67499 00:32:19.481 Removing: /var/run/dpdk/spdk_pid67838 00:32:19.481 Removing: /var/run/dpdk/spdk_pid67885 00:32:19.481 Removing: /var/run/dpdk/spdk_pid68228 00:32:19.481 Removing: /var/run/dpdk/spdk_pid68659 00:32:19.481 Removing: /var/run/dpdk/spdk_pid68947 00:32:19.481 Removing: /var/run/dpdk/spdk_pid69782 00:32:19.481 Removing: /var/run/dpdk/spdk_pid70648 00:32:19.481 Removing: /var/run/dpdk/spdk_pid70781 00:32:19.481 Removing: /var/run/dpdk/spdk_pid70861 00:32:19.481 Removing: /var/run/dpdk/spdk_pid72179 00:32:19.481 Removing: 
/var/run/dpdk/spdk_pid72446 00:32:19.481 Removing: /var/run/dpdk/spdk_pid72797 00:32:19.481 Removing: /var/run/dpdk/spdk_pid72907 00:32:19.481 Removing: /var/run/dpdk/spdk_pid73046 00:32:19.481 Removing: /var/run/dpdk/spdk_pid73087 00:32:19.481 Removing: /var/run/dpdk/spdk_pid73121 00:32:19.481 Removing: /var/run/dpdk/spdk_pid73161 00:32:19.481 Removing: /var/run/dpdk/spdk_pid73277 00:32:19.481 Removing: /var/run/dpdk/spdk_pid73423 00:32:19.481 Removing: /var/run/dpdk/spdk_pid73605 00:32:19.481 Removing: /var/run/dpdk/spdk_pid73704 00:32:19.481 Removing: /var/run/dpdk/spdk_pid73920 00:32:19.739 Removing: /var/run/dpdk/spdk_pid74028 00:32:19.739 Removing: /var/run/dpdk/spdk_pid74139 00:32:19.739 Removing: /var/run/dpdk/spdk_pid74477 00:32:19.739 Removing: /var/run/dpdk/spdk_pid74885 00:32:19.739 Removing: /var/run/dpdk/spdk_pid74889 00:32:19.739 Removing: /var/run/dpdk/spdk_pid75196 00:32:19.739 Removing: /var/run/dpdk/spdk_pid75211 00:32:19.739 Removing: /var/run/dpdk/spdk_pid75226 00:32:19.739 Removing: /var/run/dpdk/spdk_pid75263 00:32:19.739 Removing: /var/run/dpdk/spdk_pid75275 00:32:19.739 Removing: /var/run/dpdk/spdk_pid75568 00:32:19.739 Removing: /var/run/dpdk/spdk_pid75610 00:32:19.739 Removing: /var/run/dpdk/spdk_pid75911 00:32:19.739 Removing: /var/run/dpdk/spdk_pid76110 00:32:19.739 Removing: /var/run/dpdk/spdk_pid76516 00:32:19.739 Removing: /var/run/dpdk/spdk_pid77018 00:32:19.739 Removing: /var/run/dpdk/spdk_pid77632 00:32:19.739 Removing: /var/run/dpdk/spdk_pid77635 00:32:19.739 Removing: /var/run/dpdk/spdk_pid79589 00:32:19.739 Removing: /var/run/dpdk/spdk_pid79662 00:32:19.739 Removing: /var/run/dpdk/spdk_pid79735 00:32:19.739 Removing: /var/run/dpdk/spdk_pid79808 00:32:19.739 Removing: /var/run/dpdk/spdk_pid79958 00:32:19.739 Removing: /var/run/dpdk/spdk_pid80025 00:32:19.739 Removing: /var/run/dpdk/spdk_pid80096 00:32:19.739 Removing: /var/run/dpdk/spdk_pid80163 00:32:19.739 Removing: /var/run/dpdk/spdk_pid80513 00:32:19.739 Removing: /var/run/dpdk/spdk_pid81691 00:32:19.739 Removing: /var/run/dpdk/spdk_pid81844 00:32:19.739 Removing: /var/run/dpdk/spdk_pid82094 00:32:19.739 Removing: /var/run/dpdk/spdk_pid82670 00:32:19.739 Removing: /var/run/dpdk/spdk_pid82837 00:32:19.739 Removing: /var/run/dpdk/spdk_pid83008 00:32:19.739 Removing: /var/run/dpdk/spdk_pid83109 00:32:19.739 Removing: /var/run/dpdk/spdk_pid83276 00:32:19.739 Removing: /var/run/dpdk/spdk_pid83399 00:32:19.739 Removing: /var/run/dpdk/spdk_pid84090 00:32:19.739 Removing: /var/run/dpdk/spdk_pid84121 00:32:19.739 Removing: /var/run/dpdk/spdk_pid84157 00:32:19.739 Removing: /var/run/dpdk/spdk_pid84524 00:32:19.739 Removing: /var/run/dpdk/spdk_pid84562 00:32:19.739 Removing: /var/run/dpdk/spdk_pid84598 00:32:19.739 Removing: /var/run/dpdk/spdk_pid85028 00:32:19.739 Removing: /var/run/dpdk/spdk_pid85049 00:32:19.739 Removing: /var/run/dpdk/spdk_pid85302 00:32:19.739 Clean 00:32:19.739 10:00:43 -- common/autotest_common.sh@1437 -- # return 0 00:32:19.739 10:00:43 -- spdk/autotest.sh@382 -- # timing_exit post_cleanup 00:32:19.739 10:00:43 -- common/autotest_common.sh@716 -- # xtrace_disable 00:32:19.739 10:00:43 -- common/autotest_common.sh@10 -- # set +x 00:32:19.998 10:00:43 -- spdk/autotest.sh@384 -- # timing_exit autotest 00:32:19.998 10:00:43 -- common/autotest_common.sh@716 -- # xtrace_disable 00:32:19.998 10:00:43 -- common/autotest_common.sh@10 -- # set +x 00:32:19.998 10:00:43 -- spdk/autotest.sh@385 -- # chmod a+r /home/vagrant/spdk_repo/spdk/../output/timing.txt 00:32:19.998 10:00:43 -- 
spdk/autotest.sh@387 -- # [[ -f /home/vagrant/spdk_repo/spdk/../output/udev.log ]] 00:32:19.998 10:00:43 -- spdk/autotest.sh@387 -- # rm -f /home/vagrant/spdk_repo/spdk/../output/udev.log 00:32:19.998 10:00:43 -- spdk/autotest.sh@389 -- # hash lcov 00:32:19.998 10:00:43 -- spdk/autotest.sh@389 -- # [[ CC_TYPE=gcc == *\c\l\a\n\g* ]] 00:32:19.998 10:00:43 -- spdk/autotest.sh@391 -- # hostname 00:32:19.998 10:00:43 -- spdk/autotest.sh@391 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -c -d /home/vagrant/spdk_repo/spdk -t fedora38-cloud-1705279005-2131 -o /home/vagrant/spdk_repo/spdk/../output/cov_test.info 00:32:20.256 geninfo: WARNING: invalid characters removed from testname! 00:32:46.798 10:01:09 -- spdk/autotest.sh@392 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -a /home/vagrant/spdk_repo/spdk/../output/cov_base.info -a /home/vagrant/spdk_repo/spdk/../output/cov_test.info -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:32:50.080 10:01:12 -- spdk/autotest.sh@393 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/dpdk/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:32:52.041 10:01:15 -- spdk/autotest.sh@394 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '/usr/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:32:55.323 10:01:18 -- spdk/autotest.sh@395 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/examples/vmd/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:32:57.225 10:01:20 -- spdk/autotest.sh@396 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_lspci/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:33:00.580 10:01:23 -- spdk/autotest.sh@397 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_top/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:33:03.114 10:01:26 -- spdk/autotest.sh@398 -- # rm -f cov_base.info cov_test.info OLD_STDOUT OLD_STDERR 00:33:03.114 10:01:26 -- common/autobuild_common.sh@15 -- $ source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:33:03.114 10:01:26 -- scripts/common.sh@502 -- $ [[ -e /bin/wpdk_common.sh ]] 00:33:03.114 10:01:26 -- scripts/common.sh@510 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:33:03.114 10:01:26 -- 
scripts/common.sh@511 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:33:03.114 10:01:26 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:03.114 10:01:26 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:03.114 10:01:26 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:03.114 10:01:26 -- paths/export.sh@5 -- $ export PATH 00:33:03.114 10:01:26 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:03.114 10:01:26 -- common/autobuild_common.sh@434 -- $ out=/home/vagrant/spdk_repo/spdk/../output 00:33:03.114 10:01:26 -- common/autobuild_common.sh@435 -- $ date +%s 00:33:03.114 10:01:26 -- common/autobuild_common.sh@435 -- $ mktemp -dt spdk_1713434486.XXXXXX 00:33:03.114 10:01:26 -- common/autobuild_common.sh@435 -- $ SPDK_WORKSPACE=/tmp/spdk_1713434486.eUmn2g 00:33:03.114 10:01:26 -- common/autobuild_common.sh@437 -- $ [[ -n '' ]] 00:33:03.114 10:01:26 -- common/autobuild_common.sh@441 -- $ '[' -n '' ']' 00:33:03.114 10:01:26 -- common/autobuild_common.sh@444 -- $ scanbuild_exclude='--exclude /home/vagrant/spdk_repo/spdk/dpdk/' 00:33:03.114 10:01:26 -- common/autobuild_common.sh@448 -- $ scanbuild_exclude+=' --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp' 00:33:03.114 10:01:26 -- common/autobuild_common.sh@450 -- $ scanbuild='scan-build -o /home/vagrant/spdk_repo/spdk/../output/scan-build-tmp --exclude /home/vagrant/spdk_repo/spdk/dpdk/ --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp --status-bugs' 00:33:03.114 10:01:26 -- common/autobuild_common.sh@451 -- $ get_config_params 00:33:03.114 10:01:26 -- common/autotest_common.sh@385 -- $ xtrace_disable 00:33:03.115 10:01:26 -- common/autotest_common.sh@10 -- $ set +x 00:33:03.115 10:01:26 -- common/autobuild_common.sh@451 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-usdt --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-asan --enable-coverage --with-ublk --with-uring' 00:33:03.115 10:01:26 -- common/autobuild_common.sh@453 -- $ start_monitor_resources 00:33:03.115 10:01:26 -- pm/common@17 -- $ local monitor 00:33:03.115 10:01:26 -- pm/common@19 -- $ for 
monitor in "${MONITOR_RESOURCES[@]}" 00:33:03.115 10:01:26 -- pm/common@23 -- $ MONITOR_RESOURCES_PIDS["$monitor"]=87024 00:33:03.115 10:01:26 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:33:03.115 10:01:26 -- pm/common@21 -- $ date +%s 00:33:03.115 10:01:26 -- pm/common@23 -- $ MONITOR_RESOURCES_PIDS["$monitor"]=87026 00:33:03.115 10:01:26 -- pm/common@26 -- $ sleep 1 00:33:03.115 10:01:26 -- pm/common@21 -- $ date +%s 00:33:03.115 10:01:26 -- pm/common@21 -- $ sudo -E /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autopackage.sh.1713434486 00:33:03.115 10:01:26 -- pm/common@21 -- $ sudo -E /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autopackage.sh.1713434486 00:33:03.115 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autopackage.sh.1713434486_collect-vmstat.pm.log 00:33:03.115 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autopackage.sh.1713434486_collect-cpu-load.pm.log 00:33:04.056 10:01:27 -- common/autobuild_common.sh@454 -- $ trap stop_monitor_resources EXIT 00:33:04.056 10:01:27 -- spdk/autopackage.sh@10 -- $ MAKEFLAGS=-j10 00:33:04.056 10:01:27 -- spdk/autopackage.sh@11 -- $ cd /home/vagrant/spdk_repo/spdk 00:33:04.056 10:01:27 -- spdk/autopackage.sh@13 -- $ [[ 0 -eq 1 ]] 00:33:04.056 10:01:27 -- spdk/autopackage.sh@18 -- $ [[ 0 -eq 0 ]] 00:33:04.056 10:01:27 -- spdk/autopackage.sh@19 -- $ timing_finish 00:33:04.056 10:01:27 -- common/autotest_common.sh@722 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl 00:33:04.056 10:01:27 -- common/autotest_common.sh@723 -- $ '[' -x /usr/local/FlameGraph/flamegraph.pl ']' 00:33:04.056 10:01:27 -- common/autotest_common.sh@725 -- $ /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: --countname seconds /home/vagrant/spdk_repo/spdk/../output/timing.txt 00:33:04.056 10:01:27 -- spdk/autopackage.sh@20 -- $ exit 0 00:33:04.056 10:01:27 -- spdk/autopackage.sh@1 -- $ stop_monitor_resources 00:33:04.056 10:01:27 -- pm/common@30 -- $ signal_monitor_resources TERM 00:33:04.056 10:01:27 -- pm/common@41 -- $ local monitor pid pids signal=TERM 00:33:04.056 10:01:27 -- pm/common@43 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:33:04.056 10:01:27 -- pm/common@44 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-cpu-load.pid ]] 00:33:04.056 10:01:27 -- pm/common@45 -- $ pid=87032 00:33:04.056 10:01:27 -- pm/common@52 -- $ sudo kill -TERM 87032 00:33:04.056 10:01:27 -- pm/common@43 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:33:04.056 10:01:27 -- pm/common@44 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-vmstat.pid ]] 00:33:04.056 10:01:27 -- pm/common@45 -- $ pid=87033 00:33:04.056 10:01:27 -- pm/common@52 -- $ sudo kill -TERM 87033 00:33:04.315 + [[ -n 5093 ]] 00:33:04.315 + sudo kill 5093 00:33:04.325 [Pipeline] } 00:33:04.345 [Pipeline] // timeout 00:33:04.352 [Pipeline] } 00:33:04.370 [Pipeline] // stage 00:33:04.379 [Pipeline] } 00:33:04.403 [Pipeline] // catchError 00:33:04.412 [Pipeline] stage 00:33:04.414 [Pipeline] { (Stop VM) 00:33:04.428 [Pipeline] sh 00:33:04.709 + vagrant halt 00:33:08.895 ==> default: Halting domain... 00:33:14.176 [Pipeline] sh 00:33:14.455 + vagrant destroy -f 00:33:18.736 ==> default: Removing domain... 
00:33:18.748 [Pipeline] sh 00:33:19.028 + mv output /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/output 00:33:19.038 [Pipeline] } 00:33:19.056 [Pipeline] // stage 00:33:19.063 [Pipeline] } 00:33:19.080 [Pipeline] // dir 00:33:19.085 [Pipeline] } 00:33:19.103 [Pipeline] // wrap 00:33:19.111 [Pipeline] } 00:33:19.128 [Pipeline] // catchError 00:33:19.137 [Pipeline] stage 00:33:19.139 [Pipeline] { (Epilogue) 00:33:19.154 [Pipeline] sh 00:33:19.477 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh 00:33:26.053 [Pipeline] catchError 00:33:26.055 [Pipeline] { 00:33:26.070 [Pipeline] sh 00:33:26.350 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh 00:33:26.351 Artifacts sizes are good 00:33:26.360 [Pipeline] } 00:33:26.378 [Pipeline] // catchError 00:33:26.388 [Pipeline] archiveArtifacts 00:33:26.394 Archiving artifacts 00:33:26.521 [Pipeline] cleanWs 00:33:26.533 [WS-CLEANUP] Deleting project workspace... 00:33:26.533 [WS-CLEANUP] Deferred wipeout is used... 00:33:26.539 [WS-CLEANUP] done 00:33:26.540 [Pipeline] } 00:33:26.558 [Pipeline] // stage 00:33:26.564 [Pipeline] } 00:33:26.580 [Pipeline] // node 00:33:26.586 [Pipeline] End of Pipeline 00:33:26.633 Finished: SUCCESS